Biofuels and Bioenergy Jobs
Biofuels are solid, liquid, or gaseous fuels derived from relatively recently dead biological material, unlike fossil fuels, which are derived from long-dead biological material.
Biofuels can be produced from any biological carbon source, although the most common sources are plants. Various plants and plant-derived materials are used for biofuel manufacturing. Globally, biofuels are most commonly used to power vehicles, heat homes, and fuel cooking stoves. Biofuel industries are expanding in Europe, Asia, and the Americas.
Recently developed technology even allows for the conversion of pollution into renewable biofuel. Agrofuels are biofuels produced from specific crops such as soybean, sugar cane, algae, or jatropha, rather than from waste processes such as landfill off-gassing or recycled vegetable oil.
Jobs in Bioenergy
Universities, laboratories, and industry are working together to find solutions to the difficult problems surrounding the production and use of biomass for energy and products. These R&D efforts require chemists, agricultural specialists, microbiologists, biochemists, and engineers, just to name a few.
Biofuel, biopower, and biobased product plants are most cost-effective when located near their source of biomass. Thus, bioenergy industry development has a special appeal because it creates direct and indirect jobs in rural areas and countries, and may prove to be a profitable complement for many existing agricultural and forestry businesses.
Engineers and construction workers are needed to design and build bioenergy plants, while electrical/electronic and mechanical technicians, engineers (mechanical, electrical, and chemical), mechanics, and equipment operators are needed to run and maintain these plants. Some may even require individuals cross-trained in areas such as engineering and biology, or chemistry and agriculture.
Jobs in bioenergy today cut across a wide spectrum of specialties and skills. And if R&D and industrial efforts succeed in making bioenergy more commercially profitable, we may see a dramatic increase in the number of bioenergy-related jobs. We'll need more farmers and foresters to produce and harvest biomass resources, more truckers to transport the resources to the power and fuel plants, and more operators to run facilities.
Construction has long been part of humanity and how we make our mark on the world. From the smallest hut to the largest skyscraper in the world, there is something about construction that shows our accomplishments as a species. We are a company that values this long tradition and is always working on ways to improve it. As the world gets older and climate change begins to take hold, there are many more incentives to change the way in which we are doing things.
We cannot stop expanding as a race, and knowing that should give us more incentive to take care of our planet. As humanity expands, nature and the natural world around us begin to shrink. So now that we have determined that halting construction is not an option, we can begin to look at ways to make construction more environmentally friendly.
This is the idea behind our passion, which is green construction. Led by our colleagues at BC Asphalt Co, it is built around the idea that we can perform construction projects in such a way that we are not depleting the planet of natural resources, and that our structures can use renewable energy in order to function. We can begin to create a world in which humans live in harmony with nature. It will be a long time before we can begin to restore the planet, but what we can do is make sure that the planet does not get much worse.
The natural world
While it may seem surprising, much of what we already use for construction comes from the earth. From the minerals we use to make our steel to the wood that is still found in virtually every man-made structure, we can begin to see that the leap from regular construction to green construction is not a very large one. Thanks to innovations from a number of paving contractors in Sonoma County, we can change the way that we use these naturally occurring ingredients to better serve the planet as well as the people.
The idea behind green construction is not necessarily to tear the whole industry down and rebuild it, but rather to shift from simply getting the job done to getting the job done in such a way that it does not destroy the environment. This is very difficult considering that we are watching natural resources deplete before our very eyes, and knowing that even though we may be doing better than we were in terms of saving the planet, we could be doing a whole lot better.
This is why we use as much renewable energy as possible, not only while we are building but after we have built as well. To ensure that these structures can be enjoyed by future generations, we take heed in how we complete our projects and run our buildings. Green construction is still in its infancy, but we can assure our customers that our techniques are the best way at the moment to keep the planet safe.
January 2002 Volume 24 Number 1
This article is an attempt to give readers a place to start when confronted with the problems particular to plastics. Only a few aspects of the care of plastics can be presented in a newsletter format. However, I have listed the following sources and the references at the end of the article for those who need more in-depth coverage of the subject. The resources noted below should be surveyed regularly to keep abreast of rapidly accumulating new developments.
The most current and complete compilation of information on the care of plastics in museums and in private collections is the book Plastics—Collecting and Conserving (Quye and Williamson 1999). This contains much information from recent conferences (Saving the twentieth century—the conservation of modern materials (Grattan 1993), From marble to chocolate—the conservation of modern sculpture (Heuman 1995), Resins—ancient and modern (Wright and Townsend 1995), etc.) and updates and expands on information in the book Conservation of Plastics (Morgan 1991). As well, a short Museums and Galleries Commission Fact Sheet entitled Conservation of Plastic Collections (Winsor 1999) is available on-line at http://www.museums.gov.uk/pdf/conserv/Conservation_of_Plastics.pdf.
There is much ongoing research in the field of conservation and treatment of plastics. Most often it is reported in special conferences like those mentioned; in the proceedings of international conservation conferences, especially those of the Modern Materials Working Groups of the ICOM-CC and IIC; in the Newsletter and the Plastiquarian from the Plastics Historical Society; and in the reports of the Historical Plastics Research Scientists Group.
200 Years of Plastics History: A Concise History of Plastics (Fahey 2001) gives a very readable history of plastics, with good hyperlinks to make navigation easy, and states clearly when and how each plastic was introduced into commerce. It lacks a comprehensive chronology, but does include a brief one. It is available on-line at http://www.nswpmitb.com.au/historyofplastics.html.
Several surveys of the occurrence and condition of plastics, carried out in European museums, help put the issue in perspective. The results of two of these are shown in Table 1. The surveys showed that a wide variety of plastics are present in museum collections, but that only a few percent of the objects were in dire need of treatment, and those comprised a small group of plastic types. At the British Museum, all objects in need of immediate treatment and many of those requiring essential work were PVC.
Table 1. Condition surveys: Victoria and Albert Museum (Then and Oakley 1993; 3,032 plastics-containing objects) and British Museum (Shashoua 1993, Shashoua and Ward 1995)

| Conservation priority | Condition |
|---|---|
| 1 | object in perfect condition |
| 2 Low priority | slightly damaged but stable; needs cleaning; awaits resources; no immediate danger |
| 3 Essential work needed | damaged and unstable; no immediate danger |
| 4 High priority | extremely unstable and requiring urgent treatment; active deterioration; destruction imminent; mostly PVC |
At the V&A most damage is surface dirt/grease, abrasion, and scratches, much of it from poor handling and storage prior to museum accession, plus yellowing and staining mostly from self-adhesive tape and adhesives. Of the total damage observed, 23% was physical damage such as cracks, fractures, and chips, and only 13% was chemical damage, typically to rubber parts such as the tires and tracks of model tanks and tractors. This pattern has also been observed during IR spectroscopic analytical surveys by the author.
The most serious condition problems, one finds, are related to a small number of plastics. Kenegan and Quye (1999) list the four plastics most vulnerable to ageing as poly(vinyl chloride), cellulose nitrate, cellulose acetate, and polyurethane, especially polyurethane foam. Because of its potential for causing damage to other objects, notably metals, rubber, especially fully vulcanized hard rubber (ebonite and vulcanite), should be added to this list. As they degrade, all these plastics produce harmful degradation products that cause damage to other plastic and nonplastic objects in the vicinity. I call these plastics that damage their neighbors malignant plastics. Because of their malignancy, conservation strategies must deal with these plastics as a first priority.
Although all plastics degrade over time—as indeed do all organic materials—for most other plastics, the damage is mainly to the plastic itself, and not to its neighbors. These benign plastics are not so dangerous to the collection as are the malignant plastics.
Environmental agents of deterioration
As with all organic materials, all plastics, benign and malignant alike, are degraded by exposure to light, heat, moisture, and pollutants, depending on the object composition, fabrication, and environmental history. Decreasing exposure to agents of deterioration will decrease the degradation of plastics. General guidelines on control of agents of deterioration are given in the on-line version of the Framework for Preservation of Museum Collections at http://www.cci-icc.gc.ca/framework/index_e.shtml.
Light levels should be reduced to the minimum required for display and access and storage should be dark. UV radiation should be eliminated.
Variations in temperature and humidity should be avoided. Thermal expansion and contraction, or swelling and shrinking as water content rises and falls with RH, create mechanical stresses that lead to warping and fracture, especially in constrained pieces.
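As a rough first-order illustration (not from the article), the stress that builds in a fully constrained piece can be estimated from the thermal and moisture expansion coefficients:

$$\sigma \approx E\,(\alpha\,\Delta T + \beta\,\Delta RH)$$

where $E$ is the elastic modulus, $\alpha$ the coefficient of thermal expansion, and $\beta$ the moisture expansion coefficient. Plastics typically have $\alpha$ values of 50 to 200 ppm/K, several times those of most metals, so even modest temperature swings can generate appreciable stress in a constrained object.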
Some plastics are more susceptible to specific agents, so it is beneficial to concentrate on controlling that specific agent for that specific plastic (see Table 2 below).
Table 2. Agents of deterioration and their effects on specific plastics

| Plastic | UV radiation and excess light | Moisture (high relative humidity) and hydrolysis | Solvents (dissolution, environmental stress cracking) | Emissions (stains, corrosion, stickiness, gases) |
|---|---|---|---|---|
| acrylics | resistant | resistant | dissolved, swelled, stress cracking | none |
| casein formaldehyde, protein derivatives |  | formaldehyde gas, cracking due to swelling/shrinking, moldy, brittle when dry | swelled by water, resistant to organics | formaldehyde, hydrogen sulfide, other sulfur-containing gases |
| cellulose acetate | yellowed, brittle | hydrolysis produces acetic acid, oily plasticizer liquids | dissolved, swelled | acetic acid gas, oily plasticizer and degradation products on surface |
| cellulose nitrate | yellowed, brittle | hydrolysis produces acidic and oxidizing nitrogen oxide gases | dissolved, swelled | acidic and oxidizing nitrogen oxide gases, plasticizer and degradation products on surface |
| nylon (polyamide) | yellowed, brittle | potential hydrolysis at extreme conditions | softened, swelled | none |
| phenol formaldehyde | discolored and more matte | discolored and more matte | fillers swell and surface mottles with solvents | phenol and formaldehyde with severe degradation |
| polyethylene, polypropylene | yellowed, brittle | resistant | swollen by some organics | none |
| polystyrene | yellowed, brittle | resistant | dissolved, swelled, stress cracked | none |
| polyurethane | yellowed, brittle, sticky, crumbles | yellowed, brittle, sticky, crumbles | swelled, stress cracked | nitrogenous organic gases and liquids |
| poly(vinyl chloride) | yellowed, brittle | resistant | dissolved, swelled, embrittled by plasticizer extraction | oily plasticizer liquids, possibly hydrochloric acid gas under extreme conditions of moisture and light exposure |
| rubber, ebonite, vulcanite | brittle, discolored, increased matteness | hydrogen sulfide and other gases, sulfuric acid on surfaces | surface mottled by solvents | hydrogen sulfide and other sulfur-containing gases, sulfuric acid on surfaces |
| all other plastics | should be considered prone to damage by UV radiation, usually resulting in yellowing and embrittlement | condensation plastics such as esters, amides, and urethanes are subject to hydrolysis with subsequent weakening | thermoplastics may dissolve, thermosets may swell; stress cracking | harmful gases from plastics with chlorine, sulfur, and pendant (not main-chain) ester groups |
Environmental conditions that reduce the degradation of the malignant plastics will invariably be beneficial to benign plastics, so concentrating on the malignant plastics does not neglect the rest of the collection.
Cellulose esters produce acidic gaseous degradation products. Cellulose nitrate produces acrid-smelling nitrogen oxides, which convert to nitric acid by reaction with moisture in the atmosphere or in other objects. This is a strong oxidizing acid which causes tendering and decomposition of cellulose and protein, corrosion of metals, etc. Cellulose acetate produces acetic acid (vinegar odor, hence the term "vinegar syndrome" to describe cellulose acetate degradation), and cellulose butyrate and cellulose acetate butyrate produce butyric acid, which has a distinctive vomit odor. These organic acids are not so strong as nitric acid, but they also cause tendering, decomposition, and corrosion. Because they produce acidic degradation products, the cellulose ester plastics become acidic themselves.
Cellulose nitrate is found in many forms, including sheets or films (e.g., photographic film base), varnishes and lacquers, and solid objects, especially those which imitate natural materials like ivory (often called "French ivory"), tortoiseshell, and horn.
Cellulose nitrate degrades to produce acidic and oxidizing nitrogen oxide gases which can seriously damage objects that are nearby or in contact. This deterioration is accelerated by increased temperatures, elevated relative humidity, and acidic conditions. Enclosures (drawers, cabinets, display cases, etc.) that contain cellulose nitrate should be well ventilated to prevent buildup of acid vapors. Special storage conditions and locations should be considered for cellulose nitrate, including cold storage.
Cellulose nitrate was commonly plasticized with camphor (e.g., Celluloid). This material sublimes from the plastic, causing the object to become more brittle and to shrink. The shrinkage tension set up in the brittle plastic often leads to severe cracking or crizzling. This problem may not be so severe in cellulose nitrate plasticized by materials other than camphor which are less volatile. Advice on the conservation of cellulose nitrate has been given by Reilly (1991) and Williams (1994). A simple spot test for identifying cellulose nitrate in minute chips or scrapings from museum objects is described by Williams (1994). The use of papers and threads containing the sulfone-phthalein indicators Cresol Red and Cresol Purple to detect degradation in cellulose nitrate objects on display and in storage has been described by Fenn (1995).
Cellulose acetate is commonly encountered in two grades characterized by different degrees of substitution, namely, cellulose triacetate (CTA) most commonly found in sheets like photographic film base and fibers, and cellulose diacetate (CDA) in thicker sheets and 3-dimensional shapes and objects often simulating tortoiseshell, ivory, wood, and mother-of-pearl. The CTA and CDA are easily confused with cellulose nitrate when compared by visual appearance alone. Cellulose acetate objects usually contain plasticizers.
Cellulose acetate degrades primarily by acid hydrolysis, which causes deacetylation and depolymerization. Deacetylation cleaves pendant acetate groups from the cellulose polymer backbone, releasing acetic acid gas from the plastic and creating acidic surfaces on the plastic and acidic atmospheres in enclosures. This process is analogous to what happens with cellulose nitrate. Depolymerization of the backbone leads to decreased mechanical strength and fracture, plus deformation and warpage. Acetic acid is a volatile gas that diffuses through the display or storage space and can cause corrosion of metals or acid-catalyzed degradation of paper and textiles.
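Written schematically (a simplified scheme, not reproduced from the article), the acid-catalyzed deacetylation is:

$$\text{Cell–O–COCH}_3 + \text{H}_2\text{O} \;\xrightarrow{\;\text{H}^+\;}\; \text{Cell–OH} + \text{CH}_3\text{COOH}$$

The reaction is autocatalytic: the acetic acid it liberates is itself an acid catalyst, which is why degradation accelerates once under way, and why the ventilation and acid scavengers discussed below are effective.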
Additives, especially plasticizers, migrate and may be lost, or are hydrolyzed or oxidized to acidic compounds. This leads to warpage, embrittlement, and fracture, and to the development of acidic and sticky surfaces, sometimes with surface deposits of plasticizer or acidic degradation products.
Cellulose acetate should be displayed and stored under ventilated conditions or with acetic acid scavengers if stored in unventilated enclosures. Cellulose acetate objects should not be stored in enclosures with, or in proximity to, acid sensitive (particularly acetic acid sensitive) materials such as metal, textiles, and paper. Problems caused by cellulose acetate in collections and conservation treatments for cellulose acetate objects are described by Aubier et al. (1996) and Pullen and Heuman (1988).
The main conservation strategy for cellulose esters is to reduce exposure to moisture, primarily by reducing relative humidity (RH). Reducing temperature is also effective, since, in common with all chemical reactions, decreasing the temperature decreases the rate of degradation reactions. Decreased temperature also reduces the rate of plasticizer loss and so helps retain plastic flexibility (although the plastic will be less flexible while at the reduced temperature). It is also essential to ensure that there is adequate ventilation to remove harmful gaseous degradation products and prevent damage to objects in the vicinity. Objects that are badly degraded or degrading rapidly should be removed and segregated from the rest of the collection.
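The benefit of reduced temperature can be made concrete with the Arrhenius relation. As a rough worked example, assuming a typical activation energy of about 100 kJ/mol for ester hydrolysis:

$$\frac{k_{15\,^\circ\text{C}}}{k_{25\,^\circ\text{C}}} = \exp\!\left[-\frac{E_a}{R}\left(\frac{1}{288\ \text{K}}-\frac{1}{298\ \text{K}}\right)\right] \approx 0.25$$

so a 10 °C drop cuts the degradation rate to roughly a quarter under these assumptions.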
Pure poly(vinyl chloride), PVC, degrades to produce hydrochloric acid at temperatures needed to form it into usable products by molding or extrusion. As a consequence, heat stabilizers are always added to overcome this problem. Also, pure PVC is a rigid plastic, so, to create a flexible plastic, compounds called plasticizers are added. Plasticizers are typically oily polar organic liquids, which are very good solvents for many materials. Many other additives are used in PVC formulations to enable a wide variety of products to be fabricated.
The greatest problems with PVC are related to the additives. Migration of plasticizer and other additives creates accretions on the PVC surface. These deposits of additives on the surface of the plastic, called bloom, can seriously stain or corrode the surface of other materials they contact. Shashoua (2001) has found that plasticizer bloom will form if the plasticizer content is more than 30% of the PVC weight, and that phthalate ester plasticizers can hydrolyze to form crystals of phthalic acid and phthalic anhydride.
The bloom can be removed by wiping or mild solvent treatment (not recommended), but usually returns. Formation of bloom is driven by an inherent incompatibility between the plastic and the additive. Additives and their consequences for museum objects have been discussed by Williams (1993). The presence of bloom on the PVC is not damaging to the PVC, although it does indicate that the PVC is degrading. It need not be removed if contact with other objects is prevented by shields, interleaves, and packaging.
Although PVC is susceptible to degradation by light and heat, this is not usually the most serious problem in museums. Exposure of PVC to light (especially ultraviolet radiation) and heat causes a degradation reaction, called dehydrochlorination, which produces hydrochloric acid and causes the PVC to change color from yellow to brown to black. Manufacturers control this by adding light and heat stabilizers. Unfortunately, these stabilizers are consumed as they do their job, until at some point the stabilizers are exhausted and additional exposure suddenly results in deterioration. Thus a PVC object that has been surviving nicely under lights for several years may suddenly begin rapid deterioration. This was a common scenario for the vinyl roofs on cars: they would be in good shape for several years, then suddenly rot away. In experiments on PVC degradation, Shashoua (Shashoua 1996, Shashoua and Ward 1995) did not detect dehydrochlorination at room temperature in the dark after 6 months. In accelerated aging experiments, however, dehydrochlorination did occur, but incorporation of zeolites or epoxidized soya bean oil (ESBO) inhibited the discoloration of PVC caused by dehydrochlorination. Shashoua suggests including zeolite pellets in storage boxes containing PVC objects to inhibit discoloration.
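Schematically (a simplified scheme, not from the article), dehydrochlorination strips HCl from the polymer backbone:

$$[\,\text{–CH}_2\text{–CHCl–}\,]_n \;\xrightarrow{\;h\nu,\ \Delta\;}\; [\,\text{–CH=CH–}\,]_n + n\,\text{HCl}$$

The conjugated double-bond (polyene) sequences left behind absorb visible light, which is what produces the progressive yellow-to-brown-to-black discoloration.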
Polyurethane occurs in collections as polyurethane foams, coatings, and fibers. There are two types of polyurethane foams—one based on polyether polyols and the other based on polyester polyols. The polyether polyurethanes are particularly susceptible to oxidation, especially in the presence of light (photooxidation). Foam degradation is particularly devastating, usually leading to complete crumbling of the foam object, starting at its surface. Polyester polyurethanes are much less susceptible to oxidative degradation, but are subject to hydrolytic degradation at high RH.
Oxidation is initiated and accelerated by exposure to light, especially UV radiation. Kenegan and Quye (1999) note that coated and painted foams tend to be more resilient because they have a protective barrier against oxygen. This could also be due to protection against light in the coated areas. Since oxygen is present in the atmosphere, oxidative degradation can be stopped only by placing the polyurethane into an oxygen-free enclosure (anoxic storage). Anoxic storage requires that the object be sealed into a package with no ventilation. Sealing can make matters worse if the degradation products are themselves catalysts for further degradation, so sealing degrading plastics into packages must be very carefully tested and monitored.
Although avoiding exposure to light may slow oxidative degradation, degradation will not stop in the absence of light. There are many examples of polyurethane foam used for supporting objects in dark storage drawers that have completely degraded to powder.
Kerr and Batcheller (1993) describe properties and degradation of polyurethane in detail, particularly in the context of textiles (foam pads, elastic fibers, artificial suedes, and fabric coatings). Recommended environmental conditions are typical for textiles, with good ventilation to remove volatile degradation products. Also, there should be no contact between polyurethane objects and other objects,
including other parts of the object itself, to prevent sticking. Silicone and teflon coated fabrics have been used as nonstick interleaves.
When mixed and heated with a few percent of sulfur or sulfur compounds, natural rubber latex can be crosslinked, a process initially called vulcanizing, to make the familiar elastic rubber. Hard rubber (ebonite or vulcanite), an inelastic thermoplastic, is produced if as much as 30% sulfur is used. Both the elastic rubber and the hard rubber are malignant materials because they emit sulfurous gaseous degradation products. Hard rubber also develops extremely acidic surfaces covered in droplets or a film of sulfuric acid.
It was recognized early on that rubber rapidly oxidized in air and stabilizers were developed to prevent degradation. Unfortunately many of the early stabilizers were volatile colored materials. It is very common to find bright yellow stains on tissues and plastics used for wrapping and storing early rubber objects, especially dark colored ones where the yellowness of the stabilizer was not so apparent. Rubber objects that have these volatile yellow antioxidants must not be stored too close to, or sealed up with, other objects that could absorb these additives and be stained.
Hard rubber, with its high sulfur content, poses two major conservation problems. It can emit reducible sulfur compounds that will tarnish silver, and the sulfur compounds can be oxidized by atmospheric oxygen to produce sulfur oxide gases which react with atmospheric water vapor to produce acids that remain on the surface, creating very acidic surfaces. This causes damage whenever the sulfuric acid-coated surface of the hard rubber contacts acid-sensitive materials. A common use of hard rubber was as an electrical insulator in early telegraphy equipment, and much corrosion occurs where the hard rubber is in contact with the copper alloys used as conductors.
A spot test to detect materials, including hard rubber, that release reducible sulfur compounds was described by Daniels and Ward (1982). The conservation concerns, and how to deal with acidic surfaces, were discussed by Bacon (1988) and Stevenson (1993).
Having dwelt on all the problems of specific plastics, perhaps it is time to help you determine what plastics you may have in your collection. There are two options for plastics identification—wet chemical spot tests and instrumental analysis, particularly infrared (IR) spectroscopy. Spot tests are notoriously ambiguous and spectroscopy is preferred (Coxon 1993). Until recently spectroscopy had the disadvantage of requiring samples to be taken from objects and sent to remote laboratories for analysis. Now portable IR spectrometers are available so IR spectroscopy can be carried out nondestructively on-site in the museum without taking samples. The author has been conducting on-site IR spectroscopic analytical surveys of museum collections for several years (Williams 1997, 1999).
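To illustrate the kind of matching that spectrometer software performs, here is a minimal sketch of spectral library matching by correlation. It is a toy illustration only, not the author's survey procedure; the band positions and spectra below are fabricated for the example.

```python
# Toy sketch of IR spectral library matching by correlation.
# The reference band positions and the "unknown" spectrum are fabricated.
import numpy as np

def best_match(unknown, library):
    """Return (name, score) of the library spectrum best correlated with the unknown."""
    scores = {name: float(np.corrcoef(unknown, ref)[0, 1])
              for name, ref in library.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

# Common wavenumber grid (cm^-1) and Gaussian stand-ins for diagnostic bands.
grid = np.linspace(4000, 650, 1000)
library = {
    "cellulose nitrate":    np.exp(-((grid - 1650) / 40) ** 2),  # stand-in NO2 band
    "cellulose acetate":    np.exp(-((grid - 1740) / 40) ** 2),  # stand-in C=O band
    "poly(vinyl chloride)": np.exp(-((grid - 690) / 30) ** 2),   # stand-in C-Cl band
}
rng = np.random.default_rng(0)
unknown = library["cellulose acetate"] + 0.05 * rng.normal(size=grid.size)
print(best_match(unknown, library))  # -> ('cellulose acetate', ~0.99)
```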
Although spot tests are generally ambiguous, fortunately there are good spot tests that yield relatively unambiguous results for the detection of the five most malignant plastics. A summary is given in Table 3 below, and complete information can be found in the references.
Table 3. Spot tests for the five most malignant plastics

| Plastic | Spot test | Test result | Reference |
|---|---|---|---|
| Cellulose nitrate | Diphenylamine/sulfuric acid reagent | colorless to blue solution | CCI Note 17/2; SPNHC Leaflet No. 3 |
| Cellulose acetate | Alkaline hydroxylamine plus ferric chloride, acidified | burgundy red color develops | Coxon 1993 |
| Poly(vinyl chloride) | Beilstein test: copper wire heated in torch flame | colorless flame turns green | CCI Note 17/1; SPNHC Leaflet No. 3 |
| Sulfur-vulcanized rubber (ebonite, vulcanite) | Iodine/sodium azide reagent for reducible sulfur compounds | bubbles develop in reagent | Daniels and Ward 1982 |
| Polyurethane | Dimethylaminobenzaldehyde in glacial acetic acid | canary yellow color develops | Roff et al. 1971 |
Aubier, D. D., J. M Blengino, A.C. Brandt, and N. Silvie. 1996. Degradation caused by cellulose diacetate: analysis and proposals for conservation treatment. Restaurator 1996 17: 130-143.
Bacon L. 1988. The deterioration of four Giorgi flutes made of Ebonite and a possible method for their conservation, Conservation Today, papers presented at the UKIC 30th Anniversary Conference 1988. 96-100.
Coxon, H. C. 1993. Practical Pitfalls in the Identification of Plastics. In Saving the Twentieth Century: The Conservation of Modern Materials, Proceedings of a Conference: Symposium '91 - Saving the Twentieth Century, Ottawa, Canada, 15 to 20 September 1991. D. W. Grattan, ed. Ottawa: Canadian Conservation Institute. 395-406.
Daniels, V., and S. Ward. 1982. A rapid test for the detection of substances which will tarnish silver. Studies in Conservation 27: 58-60.
Fahey, D. E. 2001. 200 Years of Plastics History: A Concise History of Plastics. 6th Revised Edition, November 1999. available on-line at http://www.nswpmitb.com.au/historyofplastics.html.
Fenn, J. 1995. The cellulose nitrate time bomb: using sulphonephthalein indicators to evaluate storage strategies. In J. Heuman, ed. From Marble to Chocolate. The Conservation of Modern Sculpture, Postprints of Tate Gallery Conference, Sept. 18-20, 1995. London: Archetype Books. 87-92.
Grattan, D. W., ed. 1993. Saving the twentieth century—the conservation of modern materials, Proceedings of a Conference: Symposium '91 - Saving the Twentieth Century, Ottawa, Canada, 15 to 20 September 1991. Ottawa: Canadian Conservation Institute.
Heuman, J., ed. 1995. From Marble to Chocolate. The Conservation of Modern Sculpture, Postprints of Tate Gallery Conference, September 18-20, 1995. London: Archetype Books.
Kenegan, B., and A. Quye. 1999. Degradation—Part 2: Degradation Causes in Plastics—Collecting and Conserving, A. Quye and C. Williamson, ed. Edinburgh: NMS Publishing Ltd. 122-135.
Kerr, N., and J. Batcheller. 1993. Degradation of polyurethanes in 20th Century museum textiles. In Saving the Twentieth Century: The Conservation of Modern Materials, Proceedings of a Conference: Symposium '91 - Saving the Twentieth Century, Ottawa, Canada, 15 to 20 September 1991. D. W. Grattan, ed. Ottawa: Canadian Conservation Institute. 189-203.
Morgan, J. 1991. Conservation of Plastics—An Introduction, London: Plastics Historical Society and The Preservation Unit of the Museums and Galleries Commission. 55 pp.
Pullen, D, and J. Heuman. 1988. Cellulose acetate deterioration in the sculptures of Naum Gabo. In Modern Organic Materials Meeting. Scottish Society for Conservation and Restoration, University of Edinburgh, 14 and 15 April 1988. Edinburgh: SSCR Publications. 57-66.
Quye, A. 1993. Examining the plastic collections of the National Museums of Scotland. Conservation Science in the UK, Preprints of Meeting, Glasgow, May 1993, N. H. Tennant, ed. 48.
Quye, A., and C. Williamson, ed. 1999. Plastics - Collecting and Conserving. Edinburgh: NMS Publishing Ltd.
Reilly, J. A. 1991. Celluloid Objects: Their Chemistry and Preservation. Journal of the American Institute for Conservation 30 (2, Fall): 145-162.
Roff, W. J., J. R. Scott, and J. Pacitti. (compilers). 1971. Handbook of Common Polymers: Fibres, Films, Plastics and Rubbers. Cleveland: CRC Press, Butterworth & Co. (Publishers) Ltd.
Shashoua, Y. 2001. Research Updates. Historical Plastics Research Scientists Group Newsletter, January: 3.
Shashoua, Y. 1996. A passive approach to the conservation of polyvinyl chloride. ICOM-CC Preprints of the 11th Triennial Meeting, Edinburgh, 1-6 September 1996. 961-966.
Shashoua, Y. 1993. Research in plastics and rubbers in the British Museum. Conservation Science in the UK, Preprints, Glasgow, May. 44-47.
Shashoua, Y., and C. Ward. 1995. Plastics: Modern Resins with Ageing Problems. Resins—Ancient and Modern, Preprints of the SSCR 2nd Resins Conference, Aberdeen, 13-14 September 1995, M. M. Wright and J. H. Townsend, ed. 33-37.
Stevenson, R. D. 1993. A. W. McCurdy's developing tank: degradation of an early plastic. In Saving the Twentieth Century: The Conservation of Modern Materials, Proceedings of a Conference: Symposium '91 - Saving the Twentieth Century, Ottawa, Canada, 15 to 20 Sept. 1991. D. W. Grattan, ed. Ottawa: Canadian Conservation Institute. 183-186.
Then, E., and V. Oakley. 1993. A Survey of Plastic Objects at the Victoria and Albert Museum. Conservation Journal, Victoria & Albert Museum, 1993 (January): 11-14.
Williams, R. S. 1999. Non-destructive in-situ, on-site mid-infrared spectroscopic chemical analysis of objects in museums using a portable spectrometer with fiber optic probe, Proceedings of the 6th International Conference on 'Non-destructive Testing and Microanalysis for the Diagnostics and Conservation of the Cultural and Environmental Heritage Rome, May 27-20, 1999'. 1619-1631.
Williams, R. S. 1997. On-site non-destructive mid-IR spectroscopy of plastics in museum objects using a portable FTIR spectrometer with fiber optic probe. In Materials Issues in Art and Archaeology V, Materials Research Society Symposium Proceeding Vol. 462. Warrendale: Materials Research Society, 1997. 25-30.
Williams, R. S. 1997. Display and Storage of Museum Objects Containing Cellulose Nitrate, CCI Notes 15/3. Ottawa: Canadian Conservation Institute. 6 pp.
Williams, R. S. 1994. The Diphenylamine Spot Test for Cellulose Nitrate in Museum Objects. CCI Note 17/2. Revised and reissued. Ottawa: Canadian Conservation Institute. 2 pp.
Williams. R. S. 1994. Display and Storage of Museum Objects Containing Cellulose Nitrate. CCI Note 15/3. Ottawa: Canadian Conservation Institute. 4 pp.
Williams, R. S. 1993. The Beilstein Test: Screening Organic and Polymeric Materials for the Presence of Chlorine, with Examples of Products Tested. CCI Notes 17/1. Canadian Conservation Institute. 1993. 3 pp.
Williams, R. S. 1993. Composition Implications of Plastic Artifacts: A Survey of Additives and Their Effects on the Longevity of Plastics. In Saving the Twentieth Century: The Conservation of Modern Materials, Proceedings of a Conference: Symposium '91 - Saving the Twentieth Century, Ottawa, Canada, 15 to 20 September 1991. D. W. Grattan, ed. Ottawa: Canadian Conservation Institute. 135-152.
Williams, R.S., A.T. Brooks, S.L. Williams, and R.L. Hinrichs. 1998. Guide to the Identification of Common Clear Plastic Films. SPNHC Leaflets, 1998 (3, Fall), 4 pp. (Soc. of the Pres. of Natural History Collections). Also available on-line at http://www.spnhc.org/documents/leaflet3.pdf
Winsor, P. 1999. Conservation of Plastic Collections. Museums and Galleries Commission Fact Sheet, London: Museums and Galleries Commission. 6 pp. Also available on-line at http://www.museums.gov.uk/pdf/conserv/Conservation_of_Plastics.pdf .
- Simple is beautiful – whether it’s business model, idea or solution. Focus on solving a small problem and be good at it.
- Never forget what this word means – aspiration.
- The future won’t build itself – actual people will make it happen. If everyone had relied on others to do it, there would have been no innovation or creation then.
- When you are young, stupid or desperate, you go out and try things against all the odds.
- Corruption, bureaucracy, and inefficiency are, in some ways, technology problems.
Rule number one: We need to admit the fact that we are all consumers – we consume all kinds of resources, natural or artificial, to keep ourselves entertained. We design things to make ourselves happy, and life less boring. No exception. Human beings are selfish by nature – it's always me, me, and me. Eventually, the earth will be burned out with nothing left.
Life should be enjoyable… I like the theme song in Thomas and Friends…
A few websites worthwhile for a visit…
This is the IEEE Spectrum magazine where you see some latest technology development and press release.
A good book on deep learning
https://arstechnica.com/ A US site covering the latest technological advancements
http://www.technologyreview.com/ (MIT Technology Review)
One more thing…
Templates built with Bootstrap framework
An open-source hardware (pcDuino)
Just a few other things that seem interesting to me.
My favorite Dev tools:
Where the Internet or personal computing is moving…
If the center of computing has shifted from the PC to the mobile phone, then the mobile phone should take over all the peripheral roles of a personal computer. With cloud computing, the processing happens on the phone, while the data and applications reside in the cloud.
Another type of innovation
There are two types of innovation. The first is to fundamentally change the way people do things or to give people something they don't already have – even though sometimes people may not need it at all. The other type is to create something people use to kill their time. Nowadays there are just too many people of this sort. Opportunities? A big YES.
Tying occurs when a consumer buys one product (the “tying product”) and is required to either purchase an additional product that exists in a separate market (the “tied product”), or agrees not to purchase the additional tied product from any other seller. Tied selling is only problematic where the practice is likely to have an anti-competitive effect.
A fundamental requirement of tying is the existence of two products, the tying product and the tied product (the "separate products criterion"). The separate products criterion is not always straightforward, because all value-adding activity involves a degree of bundling of separate components; however, no economic test exists to determine where one product should end and another begin.
One can easily imagine situations where a stand-alone market for the tied product coexists with a bundled product. For example, it is possible to buy shoelaces (the tied product) as a stand-alone product in shoe stores, but sellers of new shoes (the tying product) sell them bundled with laces. Other examples include cars and GPS systems, cars and satellite radio services, and computers and browsers. This distinction has led to debate and varying approaches across jurisdictions.
The Separate Products Criterion
Tied selling is a reviewable practice established in section 77 of Part VIII of the Competition Act. There is only one case on tied selling in Canada, Canada (Director of Investigation & Research) v. Tele-Direct (Publications) Inc., 1997 CanLII 11 (CT), which establishes the following requirements for unlawful tying:
- the alleged tied seller is a major supplier;
- there are two separate products;
- there is tying; and
- there is an exclusion of competitors resulting in a substantial lessening of competition.
It is implicit in the determination of whether there are one or two products that efficiency considerations must be taken into account. Demand for separate products and the efficiency of bundling are the two "flip sides" of the separate products question. Assuming there is demand for separate products, if efficiency is proven to be the reason for bundling, there is one product; if not, there are two products. Efficiency is also critical because the existence of separate demand should not govern if the benefit to consumers of providing those products separately is outweighed by the higher costs.
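As a toy illustration only (not a statement of the legal test), the flip-sides reasoning above can be written as a small decision sketch; the function name and inputs are hypothetical:

```python
# Toy sketch of the "separate products" reasoning described above.
# Inputs are stylized findings of fact, not a real legal test.
def count_products(separate_demand: bool, efficiency_justifies_bundle: bool) -> int:
    """Return 1 or 2 products for the tying analysis."""
    if not separate_demand:
        # No distinct consumer demand for the tied item on its own:
        # treat the bundle as a single product.
        return 1
    # Separate demand exists; efficiency is the flip side.
    return 1 if efficiency_justifies_bundle else 2

# Shoes and shoelaces: laces are demanded separately, but bundling is efficient.
print(count_products(separate_demand=True, efficiency_justifies_bundle=True))   # 1
print(count_products(separate_demand=True, efficiency_justifies_bundle=False))  # 2
```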
In the US, section 1 of the Sherman Act primarily governs tying arrangements. Tying is deemed per se unlawful if:
- two separate products are involved;
- the sale or agreement to sell one product is conditioned on the buyer’s agreement to purchase another product or service;
- the seller has sufficient market power for the tying product; and
- the tying arrangement affects a not insubstantial amount of commerce.
Canadian jurisprudence has adopted the leading approach as defined by the Supreme Court of the US in the 1984 case Jefferson Parish Hospital District No. 2 v. Hyde, 466 U.S. 2. In assessing the separate products criterion, the court held that whether there are one or two products turns on the character of demand for the two items, rather than on the functional relationship between them. Thus, the most important factor in determining whether two distinct products are being tied together is whether customers want to purchase the products separately. If customers are not interested in purchasing the products separately, there is little risk the tie could foreclose any separate sales of the products.
Since Jefferson Parish, US courts have recognized that some tying arrangements have procompetitive benefits for consumers, such as reducing distribution costs. This has resulted in a shift towards a rule of reason approach, especially when the challenged conduct involves physically or technically integrating the tying and tied products (e.g., U.S. v. Microsoft, 253 F.3d 34, 89-95 (D.C. Cir. 2001)).
In the EU, Article 102 of the Treaty on the Functioning of the European Union regulates tied selling. The European Commission has outlined five requirements for an abuse of tied selling:
- the seller is dominant in the tying product market;
- the tying and tied products must be two separate products;
- the tying product is not offered without the tied product;
- the act of tying forecloses stand-alone competitors; and
- the tying conduct cannot be objectively justified.
The separate products criterion is addressed differently in Europe than in Canada and the US. The European Commission has held that the two products are distinct so long as consumers would purchase the tied product separately from the tying product (e.g., Case COMP/C-3/37.792 Microsoft). The concern with the EU’s approach is that the two products are considered separate so long as there is a separate demand for the tied product. For example, shoes and shoelaces could be considered separate products as long as there is a separate demand for shoelaces. The question should actually turn on whether there is a separate demand for shoes without shoelaces. There is in fact no separate market for shoes without shoelaces, so they ought not be considered separate products.
Implications for Innovation
The separate products criterion is not an efficient way to distinguish bundling that has anti-competitive effects from bundling that is benign or pro-competitive. Offering products together as part of a package can benefit consumers who like the convenience of buying several items at the same time, especially when the package comes at a discounted price. Incorporating new features into products to increase their value to consumers is a hallmark of innovative competition, even if innovation makes obsolete the separate stand-alone products designed to meet the same consumer needs.
Events and interesting facts that have shaped the industry
Highlights from the month of November
1876 – The first world’s fair in the U.S.
Officially known as the International Exhibition of Arts, Manufactures and Products of the Soil and Mine, the Centennial Exhibition of 1876 was the first major world’s fair to be held in the United States. The Centennial celebrated the 100th anniversary of the Declaration of Independence and showcased the United States as a rapidly developing industrial power with abundant natural resources. Nearly 10 million people visited the Centennial from May 10 to November 10, 1876, a staggering feat of cultural tourism when one considers the U.S. population totaled just 40 million at the time.
1907 – The most colossal failure in the history of exhibitions
The Jamestown Ter-Centennial Exposition, marking the three hundredth anniversary of the founding of the Jamestown colony by settlers from England, was held in Norfolk, Virginia, from April 26 to November 30, 1907. Among the many dignitaries who visited the exposition were U.S. President Theodore Roosevelt and author Mark Twain.
The event earned only $1,070,149 against its projected revenue of $3,780,000. The financial problems led to the director’s resignation mid-festival—an event that, in turn, led to a tiff between the festival board and President Roosevelt. A day after it closed, the New York Times called the 1907 Jamestown Ter-Centennial Exposition “the most colossal failure in the history of exhibitions.”
1935 – Bayonets may have stopped a riot
The U.S. Marine Band and Color Guard marched into the Plaza del Pacifico to mark the opening of the California Pacific Exposition in San Diego on the morning of May 29, 1935. Unlike the Marines, children paid 25 cents and adults 50 cents each to get in.
Corporal Joe Galli of the 30th Infantry brought the first season to a close at midnight on Armistice Day, November 11, by playing Taps. As soon as the last poignant notes had died, a technician turned off the seven fingers of lights on top of the Organ Amphitheater. The 76,033 people present did not riot, as crowds had when the 1934 Chicago Century of Progress Exposition closed. It was suspected that San Diego Exposition directors feared a repeat of the Chicago disorder, which is why they had soldiers of the 30th Infantry present, wearing steel helmets and carrying fixed bayonets.
Tradeshow History reported by Exhibit City News
1997 – NISCA files antitrust lawsuit
Nevada Independent Service Contractors Association (NISCA) filed an antitrust lawsuit against a long list of companies including GES, Freeman, several show managers, and Teamsters Local 631. NISCA claimed the defendants conspired to preclude its members from providing their convention services to tradeshow exhibitors. NISCA hired Jeffrey Jacobovitz, an antitrust specialist from Washington, D.C., and Gregory Kramer, a Las Vegas lawyer, to represent it.
NISCA also listed CB Display Service, Czarnowski, EIS, Nth Degree, Renaissance, Sho Aids and Zenith.
NISCA companies told Exhibit City News that they had calculated revenue losses to be in excess of $100,000.
2003 – Fabric: Fluid, Friendly, Fantastic
A quick glance around the showfloor – any showfloor – tells the first-time visitor what the rest of us already know: fabric is an important part of the current tradeshow environment. And there's no evidence that this is a temporary phenomenon.
The acknowledged pioneer in the field, Moss Inc. of Belfast, Maine, transitioned the use of fabric from banners and backdrops to exhibit spaces when the original owners of the company, Marilyn and Bill Moss, were forced to explore cost-effective ways to display their backpacking tents at tradeshows.
No story about fabric would be complete without mentioning Mary Carey. Hired by Moss to help with work for the 1984 Olympics, Carey, a designer who wanted to bring art to the industry, spent 10 years selling the concept of "fabric" to the exhibit builder and supplier community.
2006 – Construction begins at expanded Javits Center
As construction continued on the expanded Jacob Javits Convention Center, tourism leaders praised New York’s leadership for paving the way toward a new wave of convention business.
When completed in 2010, the Javits Center's exhibition space will expand from 760,000 to 1.1 million square feet, a 45 percent increase. Meeting room space will increase by 600 percent, from 30,000 to 210,000 square feet.
“The expansion will launch a new era for New York’s $24 billion travel and tourism sector,” said NYC & Company Chairman Jonathan M Tisch.
What do cashiers do?
Cashiers work in a variety of places including supermarkets, retail stores, gas stations, movie theaters and restaurants. As a cashier you'll probably use a cash register to ring people up, take their money and give them their change and a receipt. You might also have to wrap or bag their purchase. Cashiers sometimes handle returns and exchanges.
At the end of a shift, you'll have to count the money in your cash register and compare it with the sales data in the computer. Be careful with your money - although you probably won't get in trouble for occasionally being a few cents short, you could get fired if it happens too often.
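As a concrete sketch of that end-of-shift count, the arithmetic looks like this. The denomination set, opening float, and tolerance threshold are all made-up values for illustration:

```python
# A minimal sketch of the end-of-shift drawer count described above.
# Denomination set, opening float, and the tolerance are made-up values.
DENOMINATIONS = {"pennies": 0.01, "nickels": 0.05, "dimes": 0.10, "quarters": 0.25,
                 "ones": 1.00, "fives": 5.00, "tens": 10.00, "twenties": 20.00}

def reconcile(counts, opening_float, register_sales, tolerance=0.05):
    """Return over/short: cash counted minus what the register says should be there."""
    counted = sum(DENOMINATIONS[name] * qty for name, qty in counts.items())
    expected = opening_float + register_sales
    diff = round(counted - expected, 2)
    status = "balanced" if abs(diff) <= tolerance else ("over" if diff > 0 else "short")
    print(f"counted ${counted:.2f}, expected ${expected:.2f} -> {status} ({diff:+.2f})")
    return diff

reconcile({"twenties": 14, "tens": 6, "fives": 9, "ones": 27,
           "quarters": 31, "dimes": 12, "nickels": 8, "pennies": 43},
          opening_float=100.00, register_sales=321.80)   # -> balanced (-0.02)
```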
Depending on where you work, you might have other responsibilities as well. If you're a cashier at a supermarket, you might be asked to clean your area as well as return unwanted items to shelves. If you work at a convenience store, you might have to create money orders and sell lottery tickets.
Almost half of all cashiers work part time. Most cashiers are asked to work weekends, evenings and holidays.
How much do cashiers make?
Many cashiers make the federal minimum wage, which just went up to $7.25 an hour. According to the Bureau of Labor Statistics (BLS), most cashiers make between $6.99 and $9.44 an hour. The highest-paid cashiers can earn more than $14.50 an hour. See how much cashiers earn in your area.
What are the education requirements?
Most cashiers have a high school diploma or the GED equivalent. No higher education is required to be a cashier, but taking business classes or getting your associate's degree can help you if you eventually want to be a manager.
Career paths for cashiers
Cashier career paths can vary. If you've started out in a part-time position, learning all you can about the business and practicing good customer service can lead to a full-time position. After that, hard work can lead to opportunities as a head cashier, or even as a manager.
The future of cashier jobs
According to the BLS, most cashier jobs are expected to decline over the next few years, with the exception of gaming cashier jobs, which will increase. No need to worry, though: there will still be plenty of full-time and part-time cashier jobs available, because the BLS expects a good number of cashiers to leave their current jobs.
What is the Meaning of Fair Trade?
The meaning of fair trade can vary according to which source you use. But the definition put forward by Oxfam and the Fairtrade Foundation is that it is an alternative approach to international trade. It is a partnership aimed at sustainable development for producers excluded from of or at a disadvantage in conventional trading channels. Those that help change these practices do this by raising awareness, campaigning, and promoting better trading options. There are other definitions of course. The basic meaning is making sure everyone in the chain from producer to consumer gets a fair deal as part of a product.
Typically, products move through many companies or individuals on the way from the original producer to the final consumer. The original producer could be a family farmer located in a developing country. The final consumer might be someone living in North America or Western Europe. In between those two people, there can be several others taking a share of the profit. The original producer may sell to a local buyer. The local buyer may sell the product to a national buyer. The national buyer may sell the product to an exporter. The exporter ships the product to the final destination and sells it to a wholesaler. The wholesaler may sell the product to a distributor, who then sells it to a store. The store is the one who sells it to the final consumer. One meaning of fair trade is removing many of the profit-taking middlemen from the equation.
In a fair trade situation, the number of middlemen is reduced significantly. The local producer is part of a cooperative which contracts with a fair trade buyer. The buyer may pay part of the price up front to help with costs. Once the buyer has the product, they will likely ship it overseas to a fair trade distributor. The distributor puts the product out to stores, where the final consumer purchases it. The very meaning of fair trade comes down to more of the profit going into the original producer's pockets instead of to all the middlemen. Everyone along the way makes a living wage, without the original producer being exploited with non-living wages.
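A small worked example makes the arithmetic concrete. The margins below are hypothetical, chosen only to illustrate how each extra link in the chain shrinks the producer's share of the retail price:

```python
# Hypothetical margins, purely illustrative: each intermediary keeps a
# fraction of its selling price, so the producer's share of the final
# retail price shrinks with every extra link in the chain.
def producer_share(retail_price, margins):
    """Work backwards from the retail price through each intermediary's margin."""
    price = retail_price
    for margin in margins:       # margin = fraction of the selling price kept
        price *= (1 - margin)
    return price                 # what is left for the original producer

retail = 10.00
conventional = [0.30, 0.15, 0.15, 0.10, 0.10, 0.10]  # store, distributor, wholesaler,
                                                     # exporter, national and local buyers
fair_trade = [0.30, 0.15, 0.15]                      # store, distributor, fair trade buyer
print(f"conventional chain: ${producer_share(retail, conventional):.2f}")  # ~$3.69
print(f"fair trade chain:   ${producer_share(retail, fair_trade):.2f}")    # ~$5.06
```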
The meaning of fair trade is providing a living wage to producers. It means offering quality products at competitive prices to the consumer. It means making sure to cover the producer's costs while also giving them sufficient profit to live on.
Tonawanda, New York
The North Tonawanda side of the Gateway Harbor
Location of Tonawanda in Erie County and New York

| Mayor | Rick Davis (D) |
| Area, total | 4.1 sq mi (10.6 km²) |
| Area, land | 3.8 sq mi (9.8 km²) |
| Area, water | 0.3 sq mi (0.8 km²) |
| Elevation | 571 ft (174 m) |
| Density | 3,700/sq mi (1,400/km²) |
| Time zone | UTC−5 (EST); summer (DST) UTC−4 (EDT) |
| GNIS feature ID | 0979550 |
Tonawanda (formally City of Tonawanda, from Tahnawá•teh meaning "confluent stream" in Tuscarora) is a city in Erie County, New York, United States. The population was 15,130 at the 2010 census. It is at the northern edge of Erie County, south across the Erie Canal (Tonawanda Creek) from North Tonawanda, east of Grand Island, and north of Buffalo. It is part of the Buffalo-Niagara Falls metropolitan area.
Post-Revolutionary War European-American settlement at Tonawanda began with Henry Anguish, who built a log home in 1808. He added to the hamlet in 1811 with a tavern, both on the south side of Tonawanda Creek where it empties into the Niagara River. The hamlet grew slowly until the opening of the Erie Canal, completed in the course of the creek in 1825. The Town of Tonawanda was incorporated in 1836. The Erie Canal and the railroads that soon followed it provided economic opportunity. By the end of the 19th century, both sides of the canal were devoted to businesses as part of a leading lumber processing center. In the mid-19th century, the business center of Tonawanda was incorporated as a village within the town. The village united in a corporation with North Tonawanda across the canal. This corporation fell apart, and in 1904 the village was incorporated as the City of Tonawanda.
On September 26, 1898, a tornado struck the City of Tonawanda. After crossing over the river from Grand Island, the tornado damaged the old Murray School as well as several homes along Franklin and Kohler streets. Its worst havoc was wreaked along Fuller Avenue, where a dozen homes were severely damaged, several being leveled to the ground. No one was killed by the fierce storm, but there were numerous injuries.
From the mid-19th century to the early 20th century a section of Tonawanda was known as Goose Island. Goose Island was a man-made island in the Niagara River formed by the Erie Canal. Goose Island was a triangular piece of land bordered on one side by the Niagara River, on the second side by the Tonawanda Creek, and on the third by the Erie Canal. It was then famous among seamen the world over as the terminus of the Erie Canal and for the Goose Island girls. The Goose Island section of Tonawanda had many cheap boarding houses, cheap hotels, bars, and houses of ill repute. Canalers often wintered over on Goose Island. Goose Island was known as a bad section of Tonawanda, with drunkenness, brawling, and bawdy displays being commonplace. The gentrification of Goose Island began with the decline of the lumbering port business in Tonawanda and the building of a boxboard mill there on the island. Then the canal was motorized, eliminating mules and many canal men. Next the section of the canal from Tonawanda to Buffalo was abandoned in 1918. That section of the canal was filled in and Goose Island was no longer an island. The establishments in the Goose Island section of Tonawanda came under community pressure in the 1920s and 1930s and were closed, with more of the land there being given over to the boxboard mill. In the 1970s the boxboard mill closed and was razed along with many remaining Goose Island structures. Goose Island street names Tonawanda, First, Clay and Chestnut disappeared. At the turn of the millennium waterfront dwellings were built along the Niagara River, completing the gentrification of this area.
Spaulding Fibre became a manufacturer of leatherboard (made from leather scraps and wood pulp), transformer board, vulcanized fibre, Bakelite (under the trade name Spauldite) and Filawound (fiberglass) tube. Operating in Tonawanda from 1911 to 1992, it became the major employer in the city. The company was founded in 1873 with a leatherboard mill by Jonas Spaulding and his brother Waldo in Townsend Harbor, Massachusetts. They did business as The Spaulding Brothers Company. Jonas Spaulding had three sons: Leon C., Huntley N. and Rolland H.
With industry expanding, Jonas established leatherboard mills at Milton and North Rochester, New Hampshire, in part to allow his sons to join him in the business. The New Hampshire mills operated under the name J. Spaulding and Sons. After Jonas Spaulding's death in 1900, his sons (by then living in New Hampshire, where they had corporate headquarters at Rochester) continued to operate these mills successfully. They brought the Townsend Harbor mill under the J. Spaulding and Sons banner in 1902.
With continued success, the three Spaulding brothers added a vulcanized fibre operation in Tonawanda, New York in 1911. They added a fourth leatherboard mill in Milton (second in this community) in 1913. The mayor of Tonawanda, Charles Zuckmaier, had solicited the Spaulding brothers’ business in Tonawanda. A ground-breaking ceremony was held on July 17, 1911, for the new plant, a $600,000 investment by J. Spaulding and Sons. Operations began on April 1, 1912, with 40 employees. The daily capacity of the plant at the time was five tons of fibre sheeting and one ton of fibre tubing.
Around 1924, the sons changed the name of the company to the Spaulding Fibre Company. In the 1930s, they added a second product at the Tonawanda plant: Spauldite, a "me too" phenol formaldehyde resin material made to compete with Bakelite. The trademark now owned by Spaulding Composites can be applied to laminates made with other natural or synthetic resins as well.
After Huntley Spaulding, the last of the three brothers, died in November 1955, the Spaulding Fibre Company became part of a charitable trust previously set up by Huntley and his only sister, Marion S. Potter. The trust was created to disburse their remaining wealth within 15 years of the death of the last sibling. Marion S. Potter died on September 27, 1957.
The company in Tonawanda flourished under foremen, superintendents and workers from the local blue collar workforce. It also attracted new residents who came for the jobs. One was Richard Spencer, who left the oil fields of Bradford, Pennsylvania, to be a superintendent for two decades. He managed through several labor strikes and periods of economic unrest for the company.
In 1956 the Tonawanda plant completed an expansion that doubled the paper mill and the vulcanized fibre-making capacity of the plant. In addition, after the death of Huntley Spaulding, corporate offices relocated to Wheeler Street from Rochester, New Hampshire. In the 1960s, the Tonawanda plant added a third product line, Filawound (fiberglass) tubing.
The 50th anniversary of the Wheeler Street Plant in 1961 was marked by a special 22-page section in the Tonawanda News. The Wheeler Street Plant reportedly covered 610,000 square feet (57,000 m2), employed 1,500 workers, and had an annual payroll of $9,000,000. The company paid $153,818 in city taxes that year and was Tonawanda's largest taxpayer. The plant was nearing its peak, but there was more expansion to come.
In 1966, the charitable trust sold the Spaulding Fibre Company to Monogram Industries. The Tonawanda plant began a slow decline during a period of industrial restructuring and product and manufacturing changes. In 1984, Monogram Industries sold the Spaulding Fibre Company to Nortek. In 1988, Nortek changed the company name to Spaulding Composites. Spaulding Composites closed the Tonawanda plant on August 24, 1992.
By the time the plant closed, employment had declined to 300. Since the closure of the Tonawanda plant, Spaulding Composites twice filed for bankruptcy. The plant site had a footprint of 860,000 square feet (80,000 m2). It fell into disrepair and, because of the wastes of the industrial processes, was classified as a brown field site under environmental regulations.
In 2006, the Erie County Development Agency contracted for demolition of the derelict facilities. It was punctuated by the felling of the 250-foot (76 m)-tall smoke stack that dominated the site. (This event is documented with a handful of videos on YouTube.) Cleanup of the site was declared complete in August 2010.
| # | Site | Address | Added to the National Register of Historic Places |
|---|------|---------|---------------------------------------------------|
| 1 | Kibler High School | 284 Main St. | January 15, 1999 |
| 2 | Tonawanda (25th Separate Company) Armory | 79 Delaware Ave. | January 28, 1994 |
| 3 | US Post Office-Tonawanda | 96 Seymour St. | May 11, 1989 |
Tonawanda is located at 43.01119°N, 78.877399°W.
According to the United States Census Bureau, the city has a total area of 4.1 square miles (10.6 km²), of which 3.8 square miles (9.8 km²) is land and 0.3 square miles (0.8 km²), or 7.34%, is water.
Adjacent cities and towns
Neighborhoods and locations in the City of Tonawanda
- Gastown – A neighborhood in the northeast corner of Tonawanda, bordering the Erie Canal. Its name comes from the Gas Light Co., which was built on Long's Point, home of the historical Long's Homestead.
- "The Hill" (aka "Riverview") – A region centered around Tonawanda High School, so named because of its slightly elevated topography when compared with the rest of the relatively flat city. It is also known as Clay Hill as it was formed by a terminal glacial moraine that deposited the clay that forms the hill. The area near the high school was the site of popular clay tennis courts.
- Millstream – A neighborhood on the city's eastern side. It is named for a stream that flowed through the area, but has since been mostly channelled underground.
- Ives – a local skatepark, ice hockey rink, soccer field, and tennis court in the middle of Tonawanda. The site started out as a small blue kiddie pool and was later remodelled into the skatepark and the other facilities listed here.
The City of Tonawanda is called by many of its residents the "C.O.T.", meaning the "City" rather than "Town" of Tonawanda.
Major highways in the City of Tonawanda
- New York State Route 265 (Main St., Seymour St., River Rd.) – north–south roadway running from the Tonawanda town line (south) north through the city and over the Erie Canal/Tonawanda Creek into North Tonawanda.
- New York State Route 266 (Niagara St.) – east–west roadway that parallels the Niagara River from the Tonawanda town line (west) through the city to its eastern end at the Seymour St./River Rd. (NY 265) intersection.
- New York State Route 384 (Delaware St.) – north–south road running from the Tonawanda town line at the south, north through the city and into North Tonawanda by way of Main St. across the Canal.
- New York State Route 425 (Twin Cities Memorial Highway) – north–south highway through the east part of town from its south end at Interstate 290 north to North Tonawanda once it crosses over the Canal. (This is a major transportation route for traffic to and from North Tonawanda and beyond.)
In conjunction with the City of North Tonawanda, the City of Tonawanda celebrates an annual Canal Festival. For one week, members of both communities celebrate Tonawanda's historic location on the western end of the Erie Canal in the largest festival of its kind. The Festival began in 1983 when Freemasons in the area, in conjunction with several state and regional leaders, set out to promote the businesses of the Tonawandas, provide fund raising opportunities for local non-profit organizations, and provide recreational activities for the citizens of both Tonawanda and North Tonawanda.
The first Canal Fest was held on both sides of the canal in 1983. Today, Canal Fest is organized by Canal Fest of the Tonawandas Inc., a non-profit organization. It is estimated that over 150,000 people attend Canal Fest each year, though an accurate number is impossible to obtain since the event is free of charge and there are no turnstiles to measure crowds. Canal Fest is the largest event held along the Erie Canal today and is among the largest events in New York State.
Also in conjunction with the city of North Tonawanda, Tonawanda is home to Gateway Harbor, a public park that runs along the Erie Canal just before it joins the Niagara River. During the summer, local boaters are free to dock at the park, and the area becomes popular during the free concerts set up by the local chamber of commerce. Various local businesses sponsor a series of concerts on both the Tonawanda and North Tonawanda sides of the park.
The Historical Society of the Tonawandas operates a museum in the former New York Central & Hudson River Railroad station, which has exhibits depicting the area's lumber industry and Erie Canal history. The Long Homestead is a restored Pennsylvania German-style house built in 1829 and containing period furniture from the early 19th century (the Historical Society of the Tonawandas provides guided tours). Isle View Park, on the Niagara River overlooking Grand Island, is available for biking, hiking, rollerblading, fishing and launching boats. The Riverwalk trail passes through the park, and a pedestrian foot bridge connects the park to Niawanda Park.
Historical population figures: U.S. Decennial Census.
At the 2000 census, there were 16,136 people, 6,741 households, and 4,361 families residing in the city. The population density was 4,252.9 people per square mile (1,643.8/km²). There were 7,119 housing units at an average density of 1,876.3 per square mile (725.2/km²). The racial makeup of the city was 98.08% White, 0.42% Black or African American, 0.46% Native American, 0.39% Asian, 0.01% Pacific Islander, 0.17% from other races, and 0.46% from two or more races. Hispanic or Latino of any race were 0.89% of the population.
There were 6,741 households of which 28.9% had children under the age of 18 living with them, 49.9% were married couples living together, 10.8% had a female householder with no husband present, and 35.3% were non-families. 31.2% of all households were made up of individuals and 13.1% had someone living alone who was 65 years of age or older. The average household size was 2.39 and the average family size was 3.01.
23.9% of the population were under the age of 18, 7.6% from 18 to 24, 29.0% from 25 to 44, 22.7% from 45 to 64, and 16.8% who were 65 years of age or older. The median age was 39 years. For every 100 females, there were 94.4 males. For every 100 females age 18 and over, there were 90.2 males.
The median household income was $45,721.
Tonawanda in popular culture
Tonawanda was the home of real-life inventor Phillip Louis (Phil) Perew, who is fictionalized in the alternate-history world created by the artist-and-author couple Paul Guinan and Anina Bennett. In this history, created for the graphic novels Boilerplate and Femopolis, Perew creates an electromechanical man, called the 'Automatic Man', in the late 19th century. (At the time of writing, in February 2007, Femopolis had not been published.)
In the HBO miniseries Band of Brothers, Easy Company soldier Warren Muck states that he is from Tonawanda and once swam across the Niagara River. "Skip" Muck died at Bastogne during the Battle of the Bulge and is named on the City of Tonawanda memorial to soldiers killed in World War II.
In the 1998 film Saving Private Ryan, Private James Ryan is rescued by Tom Hanks' character. The Ryan character was based upon Sgt. Fritz (Frederick) Niland. Niland lost two brothers, Robert and Preston, in the Normandy landings. Edward Niland (a third brother) was listed as killed in action in the Pacific, but was found in a Japanese POW camp at the end of the war. Fritz Niland and Skip Muck were best friends and enlisted in the 101st together in 1942.
In Mark Twain's The Diary of Adam and Eve (circa 1904), and popularized by the musical The Apple Tree, Tonawanda is identified as the site Adam and Eve move to after they are removed from the Garden of Eden (which is identified as "Niagara Falls Park").
- Ockie Anderson, former NFL player
- Fred Brumm, NFL player
- John T. Bush, former New York State Senator
- Rick Cassata, retired CFL quarterback who attended Tonawanda High School
- Glen Cook, retired Texas Rangers pitcher, attended Tonawanda High School, Graduate of Ithaca College
- Jane Corwin, New York State Assemblywoman
- Dave Geisel, retired MLB player who attended Tonawanda High School
- Gregory John Hartmayer, Bishop of Savannah
- Kevin Hardwick, Erie County legislator
- Frank Hinkey, member of College Football Hall of Fame
- Chris Lee, former U.S. Congressman
- Bert Lewis, former MLB pitcher
- Richard Matt, convicted felon, prison escapee
- Sam Melville, bombing conspirator
- Joe Mesi, retired boxer
- Blake Miller, former football head coach of Central Michigan Chippewas
- Warren H. "Skip" Muck, member of the famed Easy Company, 506th Parachute Infantry Regiment, 101st Airborne Division
- John Neumann, first American bishop to be canonized
- Niland brothers, notable World War II soldiers
- Marc Panepinto, New York State Senator
- Phillip Louis (Phil) Perew, Lake boat captain, inventor, sporting promoter, landlord of notorious establishments on Goose Island in Tonawanda
- Thomas Perry, author
- Bobby Shuttleworth, MLS goalkeeper
- John Simson Woolson, former Federal judge
- Jules Yakapovich, longtime Kenmore West High School football coach
- Metropolitan & Central City Population: 2000-2005. Demographia.com, accessed September 3, 2006.
- "Population and Housing Unit Estimates". Retrieved June 9, 2017.
- "Tonowanda Zipcode Map". Retrieved 2017-02-19.
- Rudes, B. Tuscarora English Dictionary Toronto: University of Toronto Press, 1999
- "Tonawanda News"; July 20, 1959; p 13.
- Dave Hill, Tonawanda News, "SPAULDING FIBRE: From prosperity to decline", January 16, 2008 12:26 am
- Eugene C. Struckhoff, "The orange tree and the inchworm: an abbreviated history of the Spaulding-Potter Charitable Trusts", Concord, NH; 1973
- ECIDA - Erie County Industrial Development Agency: News Archived 2009-08-23 at the Wayback Machine.
- Harold Mcneil; "Demolition marks end of era: Smokestack at old Spaulding Fibre plant comes a-tumbling down", The Buffalo News, 22 Dec 2006
- Barbara O'Brien; "Spaulding Fibre project complete", The Buffalo News, 26 August 2010
- National Park Service (2009-03-13). "National Register Information System". National Register of Historic Places. National Park Service.
- "US Gazetteer files: 2010, 2000, and 1990". United States Census Bureau. 2011-02-12. Retrieved 2011-04-23.
- Smyczynski, Christine A. (2005), "Western New York: From Niagara Falls and Southern Ontario to the Western Edge of the Finger Lakes". pp 104-108. The Countryman Press: Woodstock, Vermont.
- "Census of Population and Housing". Census.gov. Retrieved June 4, 2015.
- "American FactFinder". United States Census Bureau. Retrieved 2008-01-31.
- "Louis Perew's Automaton". Retrieved 2006-06-01. |
How does your organization organize documents such as standards, policies, procedures, etc.?
The UCF team wants your input on how the following organizational documents should be organized in a hierarchy. Your feedback will help us understand how better to organize the Common Controls that pertain to these documents within our Common Control Hierarchy.
This is the proposed hierarchy of organizational documents.
What contains what?
Organizations create high-level and operational documents to help them operate in a consistent manner to achieve business objectives and goals. Below is a suggestion of how an organization should develop and organize these high-level and operational documents (a short code sketch after the list restates the containment rules).
- Strategy is on the same level as framework
- Frameworks contain guidelines/standards, policies, measures, and programs.
- Guidelines/Standards contain requirements and specifications.
- Policies are standalone, in that they contain nothing.
- Measures contain methodologies, techniques, systems, and processes.
- Programs contain plans and procedures.
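To make these containment rules concrete, here is a minimal sketch (in Python) that encodes the proposed hierarchy as a mapping from each document type to the types it may contain. The structure and names are illustrative only and are not part of any UCF specification.

```python
# Proposed document hierarchy, encoded as parent -> allowed children.
# An empty list means the document type is standalone (contains nothing).
# "strategy" sits at the same level as "framework" rather than inside it.
DOCUMENT_HIERARCHY = {
    "strategy": [],
    "framework": ["guideline/standard", "policy", "measure", "program"],
    "guideline/standard": ["requirement", "specification"],
    "policy": [],
    "measure": ["methodology", "technique", "system", "process"],
    "program": ["plan", "procedure"],
}

def may_contain(parent: str, child: str) -> bool:
    """Check whether one document type may contain another."""
    return child in DOCUMENT_HIERARCHY.get(parent, [])

print(may_contain("program", "procedure"))   # True
print(may_contain("policy", "procedure"))    # False: policies are standalone
```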
How this Relates to the Common Control Hierarchy
The following table and image describe the parent/child relationships of Common Controls that mandate the establishing and maintaining of organizational documents within the UCF Common Control Hierarchy.
UCF Common Control Hierarchy Visual Representation
So that we are all on the same page, here is a list of the documents mentioned and their definitions.
framework: The overall documented structure and template that the organization can use to create and maintain an organizational effort. (It defines the scope, objectives, activities, and structure)
guideline: A documented recommendation of how an organization implements something. (Inspiration for programs, policies, etc.)
measure: A plan or course of action taken to achieve a particular purpose.
methodology: A particular way of performing an operation designed to produce precise deliverables at the end of each stage.
plan: A step-by-step outline of the processes and procedures to be performed to complete or implement something.
policy: An official expression of principles that direct an organization's operations.
procedure: A detailed description of the steps necessary to implement or perform something in conformance with applicable standards. A procedure is written to ensure something is implemented or performed in the same manner in order to obtain the same results.
process: A particular series of actions or steps to bring about a certain outcome; series of procedures.
program: 1. A structured grouping of interdependent projects that includes the full scope of business, process, people, technology, and organizational activities that are required (both necessary and sufficient) to achieve a clearly specified business outcome.
2. A documented listing of procedures, schedules, roles and responsibilities, and plans to be performed to implement an organizational effort.
requirement: A condition or capability that must be met.
specification: A defined set of requirements.
standard: A formalized guideline, directive, or specification with which compliance is mandatory, and whose implementation is deemed achievable, measurable, and auditable.
strategy: A plan of action designed to achieve a long-term or overall aim.
system: A collection of techniques, processes, and technologies implemented while following the documented programs.
technique: The use of a specific technology or procedure to achieve a business outcome in alignment with the organization's methodologies.
As Mayor Marty Walsh of Boston hosts the annual meeting of the United States Conference of Mayors, many of the mayors bemoan federal policies that undermine local efforts to address climate change.
The mayors are to be commended for the collaboration they have just announced to buy more renewable power, but they may not know that many of them — including all in Massachusetts — already have a stunningly effective tool to accelerate the development of renewable power like solar and wind and to reduce greenhouse gas emissions. The tool is a readily available mechanism for purchasing even larger amounts of renewable power. That tool is municipal aggregation, known in some communities as community-choice aggregation or community-choice energy.
Utilities like Eversource and National Grid deliver electricity, but they don’t generate it. Instead, they buy electricity for most residential and small business customers, who are known as “Basic Service” customers.
In Massachusetts, a state law authorizes municipal aggregation for cities and towns that choose it. The law gives them the right to buy electricity on behalf of the utility’s Basic Service customers. A municipal aggregation program has several advantages: It often (though not always) saves customers money; it’s a trustworthy program, vetted by the city or town; and it can offer more stable electricity prices than the electric utilities provide. And aggregation can provide communities with more renewable power than they would otherwise get from their utility.
A different state law imposes so-called renewable portfolio standards, which require Massachusetts utilities to deliver a portion of the electricity they purchase for customers from renewable sources. This year, the portion is 13 percent, and it increases 1 percent annually (with bills pending in the Legislature to increase that rate).
For several years, Massachusetts cities and towns that adopted aggregation programs — about 100 of them — did so only to try to save money. But many other cities and towns have realized that they could use the likely savings from aggregation to purchase more renewable power than utilities are required to provide. In the last few years, about 40 of the state’s communities have adopted or begun the process of adopting aggregation programs that buy more renewable power than the renewable portfolio standards require. Most of them buy 5 percent more, although Brookline’s aggregation program specifies 25 percent. Newton expects to include a sizeable renewable component in its program.
In Massachusetts, municipal aggregation is an “opt-out” program. Electric customers are informed of the program in a variety of ways, including by letter from their city or town. Customers who take no action are automatic participants in the aggregation at the “standard” level (e.g., the 5 percent, or 25 percent in Brookline). But they can choose at any time, with no fee, to opt out and remain utility Basic Service customers.
The opt-out feature of municipal aggregation, together with the renewable power component, gives the program its power to lower greenhouse gas emissions. Most electric customers do not opt out, in part because the additional renewable power that cities and towns have chosen thus far is unlikely to increase electric bills more than a few dollars per month on average, if that. In fact, the experience of several Massachusetts communities is that, at least at certain times, the cost of electricity in an aggregation is lower than the Basic Service price.
It’s hard to over-emphasize the importance of a program that drives the development of renewable electricity. Converting to cleaner electricity is the path to decarbonizing the economy.
The mayors assembling in Boston have a real opportunity. In states that do not have municipal aggregation laws, they can advocate for their adoption. In states that already have statutory authority, they can adopt municipal aggregation programs with a renewable component. The mayors have power that Washington can't touch.

Ann Berwick is director of sustainability for the city of Newton. She was chair of the Massachusetts Department of Public Utilities and undersecretary for energy under Governor Deval Patrick.
Other Thermoforming Materials
Acrylic has very good surface cosmetics, great clarity, and is UV resistant. Surface defects and scratches can be repaired easily. Acrylic must be modified to be thermoformable.
The term TPO can mean many things; generally it refers to a mixture of polypropylene, talc and rubber, in varying proportions. TPO is difficult to process until the thermoformer adds tight process controls. TPO is generally flexible, very impact resistant, and has a soft surface. TPO has a dull surface unless a paint-replacement grade is specified. TPO can be tailored to fit many applications by varying the mixture, but currently has the most significant use in automobiles.
PVC has good electrical insulating properties and is used in wire insulation products. It has very good chemical resistance and can have good cosmetic properties. PVC is inherently fire retardant, although some locales prohibit its use in buildings because the gases emitted when it burns can be poisonous.
Lignocellulosic Bioethanol Production
One of the hot topics today in Europe and the wider world is the potential of lignocellulosic bioethanol, i.e. bioethanol made from lignin- and cellulose-rich feedstocks such as trees and energy crops, including fast-growing grasses like miscanthus. This means a far greater range of biomass can be used for bioethanol production, in more areas of the world than is possible with sugar or cereal feedstock crops, and, importantly, the threat of biofuels competing for land with food crops is negated completely. Sustainably managed forests, crop residues and energy crops can provide a substantial feedstock source for lignocellulosic bioethanol.
This so-called "second-generation" biofuel is more complex to produce than fuel from traditional feedstocks, and at the moment it is more expensive due to the added processing and the more costly enzymes involved in production. However, it is far more favorably regarded by politicians, especially in the EU, as a CO2-friendly alternative to fossil transport fuels than corn, wheat or sugar beet bioethanol.
The main process steps – biomass feedstock handling, pre-treatment, hydrolysis, and fermentation to ethanol – are shown in the two diagrams below.
Image Source: US Department of Energy – Genome Management Information System |
ONTARIO, Calif.—East of LA, a natural gas peaker plant surrounded by fields of cows got a new, futuristic neighbor. Under a maze of transmission lines, a 20MW battery storage facility made of nearly 400 closet-sized batteries sitting on concrete pads can now deliver 80MWh of stored energy to utilities.

The project is an anomaly not just because it's one of the largest energy storage facilities on the grid in California today, but also because it was built in record time. The project was announced just last September, when regulators ordered utility Southern California Edison to invest in utility-scale battery storage, a year after a natural gas well in Aliso Canyon, California, sprang a leak and released 1.6 million pounds of methane into the atmosphere. The leak prompted a shutdown of the natural gas storage facility, one of the largest west of the Mississippi. Regulators were concerned that such a shutdown would cause energy and gas shortages, although that worry has not entirely come to fruition, and SoCal Gas has begun tentatively withdrawing gas again in recent weeks.
The ability to store electricity is something that appeals to state regulators because it also moves toward helping intermittent renewable energy—like wind, which only is produced when wind blows, or solar, which only is produced when the sun shines—become baseload energy. If you can store it after it’s produced, then you can call upon that energy to feed the grid at any moment, even when wind and sun are absent.
The Tesla battery facility is situated on 1.5 acres, and it’s modular in design—two 10 MW collections of 198 industrial-grade Tesla Powerpacks and 24 inverters are connected to two separate circuits at the Mira Loma substation. Unlike its neighboring Mira Loma natural gas peaker plant, which operates to make up for over- or undersupply of energy on the grid, the new battery facility operates only when there’s immediate demand. Southern California Edison’s market operations group submits bids for the energy at the battery plant and the California Independent Systems Operator (CAISO), a nonprofit that oversees the state’s electric system, will award the bid if a customer needs that battery power.
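The headline figures above are easy to sanity-check: 80MWh of energy at 20MW of power implies roughly four hours of discharge at full output, and two banks of 198 Powerpacks account for the "nearly 400" units mentioned earlier. A quick back-of-the-envelope calculation follows; the per-pack figure is an average implied by the totals, not a published Tesla specification.

```python
# Sanity check of the article's figures for the Mira Loma battery facility.
power_mw = 20.0        # rated power of the facility
energy_mwh = 80.0      # energy delivered to utilities
banks = 2              # two 10 MW collections
packs_per_bank = 198   # industrial Powerpacks per bank

duration_h = energy_mwh / power_mw       # hours of discharge at full power
total_packs = banks * packs_per_bank     # the "nearly 400" batteries
avg_kw_per_pack = power_mw * 1000 / total_packs  # implied average, not a spec

print(f"{duration_h:.0f} h at full output")   # 4 h
print(f"{total_packs} Powerpacks in total")   # 396
print(f"~{avg_kw_per_pack:.0f} kW per pack")  # ~51 kW
```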
On Monday, a variety of utility officials, local politicians, and Tesla employees gathered for a ribbon-cutting ceremony for this storage facility, which is already operational. J.B. Straubel, Tesla's chief technical officer, noted that these Powerpacks were all manufactured at Tesla's Gigafactory outside of Reno, Nevada. The industrial Powerpacks are essentially larger versions of Tesla's home-storage Powerwall unit, and Straubel noted that although the chemistry of the lithium-ion batteries is slightly different from that of Tesla vehicle batteries, the company used much of what it learned from building car batteries to inform the designs of the stationary storage units. The batteries are second-generation stationary storage units from Tesla, with double the energy density of the unit Tesla announced in May 2015.
Southern California Edison officials said that the site in Ontario was chosen from 70 potential sites, which they narrowed down based on land availability, means of interconnection with transmission lines, and ability to construct the site quickly. Kevin Payne, the CEO of Southern California Edison, told journalists on a tour of the facility that the speed of construction is unlikely to be repeated, given that other sites may not have the ideal characteristics that this site had. “These projects are not likely to always be a three-month turnaround. There was a special urgency with one... but if you look around you, this was just dirt a few months ago…so this one I think shows what can be done with all the right urgency and stars aligning,” Payne said.
Payne also noted that storage facilities were getting more and more sophisticated. He cited an earlier demonstration project called the Tehachapi Energy Storage Project, which went online in 2014 and at the time was the largest energy storage facility in North America. But Tehachapi "only provided 40 percent of the storage you see here," Payne told the audience on Monday.
California has committed to cutting its greenhouse gas emissions to 40 percent below 1990 levels by 2030, meaning the state is looking to add more renewable energy to the grid. But Michael Picker, commissioner of the California Public Utilities Commission (CPUC), noted on Monday that part of reaching that goal is going to mean electrifying the transportation sector, which will create extra demand that utilities will need to meet. Currently only 20 percent of California's greenhouse gas emissions comes from utilities, but 40 percent comes from transportation. Tesla, of course, is working on that end of the emissions-creating spectrum, too, with its electric vehicles.
Two other energy storage facilities are being built in California currently. San Diego Gas & Electric is building a system with AES Energy Storage, and AltaGas is building a system with Greensmith Energy Partners, according to the Los Angeles Times. In total, the three projects will add 77.5 MW of storage capacity to the grid.
Small businesses are an important part of the United States economy. In 1996, there were about 5.5 million small businesses in the United States employing between zero and five hundred workers, about 99% of all non-farm U.S. businesses. Businesses with five hundred or fewer employees employ 53% of the private non-farm work force in the United States, account for 47% of all sales, and are responsible for 51% of the private gross domestic product. During the period 1992 to 1996, small firms with fewer than five hundred employees also created virtually all of the net new jobs in the U.S.
These small businesses often find it difficult to raise capital. An obvious first source of capital is the entrepreneur's personal wealth, but most entrepreneurs have insufficient personal funds to finance a business. Loans are another possible source of financing, but "because small startup businesses have little or no past record of performance, loans are virtually impossible to obtain." Small businesses may also have great difficulty obtaining money, particularly seed capital, from venture capital funds. Many small business owners thus turn to friends and family, and, if they provide insufficient funds, to the general public.
When small businesses turn to public investors (often even when they resort only to friends and family), those small businesses, whether they realize it or not, encounter securities laws. The public sale of securities in the United States is heavily regulated. Under the Securities Act of 1933, issuers making public offerings must file a registration statement with the Securities and Exchange Commission (SEC) and comply with the Act's prospectus delivery requirements and restrictions on communications. In addition, the issuer may have to undergo a similar, sometimes even more rigorous, registration process in the various states in which the offering is made.
Federal and state regulation of securities offerings poses problems for small businesses. Many small business issuers and their advisers are totally unaware that they must comply with federal or state securities law. Small business issuers often sell securities without consulting an attorney. If they do consult an attorney, it is often an attorney unfamiliar with federal securities law and "the many exemptions comprehensible only to the lawyer familiar with the Alice-in-Wonderland quality of securities law." Small business promoters often mistakenly believe that federal and state securities laws apply only to large corporations whose securities are listed on a national exchange. Or, they may be vaguely aware of the Securities Act exemption for "transactions by an issuer not involving any public offering" and think they are safe as long as they confine their offering to friends and family. But the availability of the private offering exemption turns on the investors' sophistication and access to information about the business; in short, the ability of offerees to fend for themselves. The private offering exemption is not available when promoters sell to "a diverse group of uninformed friends, neighbors and associates," or even to existing investors in the company. As a result, "[w]ith monotonous frequency," securities lawyers are faced with small business clients who have already sold securities with no concern for the application of the Securities Act.
Enterprise resource planning (ERP) refers to a computer information system that integrates all the business activities and processes throughout an entire organization. ERP systems incorporate many of the features available in other types of manufacturing programs, such as project management, supplier management, product data management, and scheduling. The objective of ERP is to provide seamless, real-time information to all employees throughout the enterprise. Companies commonly use ERP systems to communicate the progress of orders and projects throughout the supply chain, and to track the costs and availability of value-added services.
ERP systems offer companies the potential to streamline operations, eliminate overlap and bottle-necks, and save money and resources. But ERP systems are very expensive and time-consuming to implement, and surveys have shown that not all companies achieve the desired benefits. According to the online business resource Darwin Executive Guides, it is "a tall order, building a single software program that serves the needs of people in finance as well as it does the people in human resources and the warehouse… To do ERP right, the ways you do business will need to change and the ways people do their jobs will need to change too. And that kind of change doesn't come without pain."
ERP is a part of an evolutionary process that began with material requirements planning (MRP). MRP is a computer-based, time-phased system for planning and controlling the production and inventory function of a firm, from the purchase of materials to the shipment of finished goods. It begins with the aggregation of demand for finished goods from a number of sources (orders, forecasts, and safety stock). This results in a master production schedule (MPS) for finished goods. Using this MPS and a bill-of-material (a listing of all component parts that make up the finished goods), the MRP logic determines the gross requirements for all component parts and subassemblies. From an inventory status file, the MRP logic deducts the on-hand inventory balance and all open orders to yield the net requirements for all parts. Then all requirements are offset by their lead times to provide a date by which an order must be released in order to avoid delaying the production of finished goods.
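To make the netting-and-offsetting step concrete, here is a minimal sketch of the MRP logic described above. It is illustrative only: the part names, quantities, and lead times are invented, and a real system would also explode the bill-of-material level by level.

```python
from datetime import date, timedelta

# Hypothetical inputs: gross requirements come from exploding the MPS
# through the bill-of-material; inventory and open orders are per part.
gross_requirements = {"wheel": 400, "axle": 200}   # units needed, by part
on_hand            = {"wheel": 120, "axle": 50}    # current inventory
open_orders        = {"wheel": 80,  "axle": 0}     # already on order
lead_time_days     = {"wheel": 14,  "axle": 21}    # procurement lead time

need_date = date(2025, 9, 1)  # date the finished goods require the parts

for part, gross in gross_requirements.items():
    # Net requirement = gross - on-hand inventory - open orders (floored at 0).
    net = max(0, gross - on_hand[part] - open_orders[part])
    # Offset by lead time to find the latest acceptable order-release date.
    release = need_date - timedelta(days=lead_time_days[part])
    print(f"{part}: net requirement {net} units, release order by {release}")
```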
From this MRP logic evolved manufacturing resource planning (MRP II). Before MRP II, many firms maintained a separate computer system within each functional department, which led to the overlap in storage of much of the firm's information in several different databases. In some cases, the firm did not even know how many different databases held certain information, making it difficult, if not impossible, to update it. This could also cause confusion throughout the firm if different units (such as engineering, production, sales, and accounting) held different values for the same variables. MRP II expands the role of MRP by linking together such functions as business planning, sales and operations planning, capacity requirements planning, and all related support functions. The output from these MRP II functions can be integrated into financial reports, such as the business plan, purchase-commitment report, shipping budget, and inventory projections. MRP II is capable of addressing operational planning in units or financial planning in dollars, and has a simulation capacity that allows its users to analyze the potential consequences of alternative decisions.
The next step in the evolutionary process was enterprise resource planning (ERP), a term coined by the Gartner Group of Stamford, Connecticut. ERP extends the concept of the shared database to all functions within the firm. By entering information only once at the source and making it available to all employees, ERP enables each function to interact with one centralized database and server. Not only does this eliminate the need for different departments within the firm to reenter the same information over and over again into separate computer systems, but it also eliminates the incompatibility that was created by past practice.
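A toy illustration of the single shared database idea follows; the record layout and function names are invented. The point is that the warehouse and accounting "modules" read and write the same record, so information entered once at the source is immediately consistent everywhere.

```python
# Minimal illustration of a single shared record: an update made once
# by one department is seen by every other function, with no re-keying.
shared_db = {
    "PART-1001": {"description": "Axle", "on_hand": 50, "unit_cost": 12.40},
}

def receive_stock(part_id: str, qty: int) -> None:
    """Warehouse posts a receipt directly against the shared record."""
    shared_db[part_id]["on_hand"] += qty

def inventory_value(part_id: str) -> float:
    """Accounting values inventory from the same record the warehouse updated."""
    rec = shared_db[part_id]
    return rec["on_hand"] * rec["unit_cost"]

receive_stock("PART-1001", 25)
print(inventory_value("PART-1001"))  # 75 units * 12.40 = 930.0
```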
ERP is a hybrid of many different types of software, incorporating many of the features available in other programs. ERP provides a way to keep track of materials, inventory, human resources, billing, and purchase orders. It is also useful for managing various types of orders, from mass-customized orders where daily or weekly shifts occur within the plant or multiple plants, to products that are made-to-stock, made-to-order, or assembled-to-order.
Higher-level ERPs employ design engineering and engineering change control modules. These modules facilitate the development of new product-engineering information and provide for modification of existing bills of material, allowing engineers to support working models of items and bills of material prior to their production releases.
It is important to understand that ERPs are not cheap to implement and operate, nor can they be implemented overnight. Owens-Corning spent more than $100 million over the course of two years installing one of the most popular ERP systems, SAP AG's R/3 system. Microsoft spent $25 million over 10 months installing R/3. Chevron also spent $100 million on installation. Apparently, however, the benefits of ERP implementation and use can be enormous. Microsoft used its ERP system to replace 33 different financial tracking systems used in 26 of its subsidiaries, with an expected savings of $18 million annually. Similarly, Chevron expected to recoup its $100 million investment within two years.
Owens-Corning's aim was to offer buyers one-stop shopping for insulation, pipes, and roofing material. Use of the R/3 facilitated this goal by allowing sales representatives to quickly see what products were available at any plant or warehouse. Analog Devices used the R/3 to consolidate the products stored at its warehouse, thereby creating an international order-processing system that can calculate exchange rates automatically.

ERP and supply chain management
When ERP systems first appeared, they acted as the connection between front-office operations (e.g., sales and forecasting) and the day-to-day functions of manufacturing. As ERP technology has advanced, the systems have increasingly incorporated logistics and warehousing capabilities, further connecting them with the supply chain. Some ERP systems offer Internet functionality, which can provide real-time connectivity from suppliers to the end customer.
The result of ERP use is more than an automation of existing processes; it is a significantly new way of doing business that enables a firm to respond to market changes more rapidly and efficiently. This can apply to service firms as well as manufacturers. Many ERP packages also let the user track and cost service products in the same way they compute the cost of making, storing, and shipping physical products.
R. Anthony Inman
Revised by Laurie Hillstrom
"Enterprise Resource Planning." Darwin Executive Guides Available from < http://guide.darwinmag.com/technology/enterprise/erp >.
Hanson, J.J. "Successful ERP Implementations Go Far Beyond Software." San Diego Business Journal (5 July 2004).
Larson, Melissa. "Meet Customer Demands with New ERP Systems." Quality (February 1998): 80–81.
Millman, Gregory J. "What Did You Get from ERP and What Can You Get?" Financial Executive (May 2004).
O'Leary, Daniel F. ERP: Systems, Life Cycle, E-Commerce, and Risk. Cambridge University Press, 2000.
Olinger, Charles. "The Issues Behind ERP Acceptance and Implementation." APICS: The Performance Advantage (June 1998): 44–48.
Wallace, Thomas F., and Michael H. Kremzar. ERP: Making It Happen: The Implementer's Guide to Success with ERP. New York: John Wiley, 2001.
Author: Marcello Pompa – Industrial Engineering – University “Campus Bio-Medico” of Rome
Energy systems are changing fast. The methods of producing energy and the ways of transmitting it are changing. The consumption of electrical energy is growing and its generation is becoming more decentralized, making grid management increasingly complex.
With the objective to overcome the weaknesses of conventional electrical grids, the Smart Grid was introduced. A Smart Grid is an electricity network based on two-way digital communication. This system allows for analysis, monitoring, communication and control with the aim to improve efficiency and reduce energy consumption and cost.
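As a rough illustration of what this two-way communication enables, the toy sketch below has meters reporting usage upstream while a controller sends curtailment signals back when demand exceeds supply. All names and numbers are invented; real grid control is vastly more involved.

```python
from dataclasses import dataclass

@dataclass
class Meter:
    meter_id: str
    usage_kw: float
    curtailed: bool = False

def control_cycle(meters: list[Meter], supply_kw: float) -> None:
    total = sum(m.usage_kw for m in meters)            # monitoring
    print(f"Demand {total:.0f} kW vs supply {supply_kw:.0f} kW")
    if total > supply_kw:                              # analysis
        # Control: ask the largest consumers to shed load first.
        for m in sorted(meters, key=lambda m: -m.usage_kw):
            if total <= supply_kw:
                break
            m.curtailed = True                         # signal sent downstream
            total -= m.usage_kw * 0.2                  # assume a 20% reduction
            print(f"Curtailment signal sent to meter {m.meter_id}")

control_cycle([Meter("A", 120.0), Meter("B", 80.0), Meter("C", 40.0)],
              supply_kw=220.0)
```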
The Smart Grid has the opportunity to move the energy industry into a future of greater reliability, efficiency, and availability, while improving environmental health. During this transition, it will be critical to carry out technology improvements, studies, consumer education and standards regulation to ensure the benefits of the Smart Grid are realized. The advantages of Smart Grids are:
- Faster restoration of electricity after power disturbances;
- Improved transmission efficiency;
- Reduced costs;
- Increased integration of large-scale renewable energy systems;
- Improved security;
- Better support for plug-in hybrid and electric vehicle technology.
In the following, a review of Smart Grid technology, with examples of installations and future developments, is presented.
The competitive nature of the plastics industry, together with the invention of new and innovative materials and techniques, can make success difficult. One mistake can result in an economic loss from which a company will have difficulty recovering. This makes it essential that companies capitalize on their technology, instruments and skills to ensure the material they employ performs to expectations under the selected processing method. To help achieve this, many companies apply capillary rheology. While melt flow testing is another option, a capillary rheometer can provide more information on the behavior and properties of a material such as a polymer or plastic.
A capillary rheometer is an instrument companies employ to measure changes in a material's viscosity relative to shear rate. If the rheometer is controlled-stress and high-shear, its parts will consist of:
- A heated barrel – single or double bore are the two basic options. Double bores come into play if the technician wishes to conduct two tests under diverse conditions simultaneously. If a twin bore is combined with a “zero length die” this will allow the technician to determine both shear and extensional viscosity concurrently
- A piston
- A calibrated die – exchanged whenever the company needs to determine the rheological properties of the material under different conditions
In this manner, the rheometer measures not only the load, but also the piston speed and the die geometry. From these measurements, technicians can calculate the shear viscosity by knowing three critical factors (a worked sketch follows the list):
- Die dimensions
- Piston speed
- Load (the force on the piston, from which the pressure drop across the die is obtained)
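As a worked sketch of how those three factors combine, the following uses the standard capillary equations: flow rate from piston speed and barrel area, pressure drop from the load, and the classic wall shear-rate and shear-stress formulas. All numbers are illustrative, and no Bagley or Rabinowitsch corrections are applied, so the results are "apparent" values.

```python
import math

# Illustrative inputs (SI units); real tests use calibrated values.
barrel_radius = 0.0075   # m  (15 mm bore)
die_radius    = 0.0005   # m  (1 mm diameter die)
die_length    = 0.016    # m  (L/D = 16)
piston_speed  = 0.001    # m/s
piston_force  = 5000.0   # N  (the measured load)

# Volumetric flow rate through the die = barrel area x piston speed.
barrel_area = math.pi * barrel_radius**2
Q = piston_speed * barrel_area                          # m^3/s

# Pressure drop across the die, from the load on the piston.
delta_p = piston_force / barrel_area                    # Pa

# Apparent wall shear rate and wall shear stress (no corrections).
shear_rate   = 4 * Q / (math.pi * die_radius**3)        # 1/s
shear_stress = delta_p * die_radius / (2 * die_length)  # Pa

viscosity = shear_stress / shear_rate                   # Pa.s
print(f"Apparent shear rate: {shear_rate:.0f} 1/s")     # ~1800 1/s
print(f"Apparent viscosity: {viscosity:.0f} Pa.s")      # ~246 Pa.s
```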
Technicians employ the capillary rheometer in various types of material processes. These include extrusion and injection molding where the rheometers track the flow of the plastic or polymer through the defined space to achieve a measure of true or absolute viscosity, something not achieved by torque rheometers.
Why Employ Capillary Rheology?
Several reasons lie behind the use of a capillary rheometer. By adopting this method, a technician can supply the company with valuable information including:
- Determination of the optimal working parameters for various processing methods including blow molding, extrusion and injection molding
- Examination of various processing concerns more swiftly and with less disruption
- Discovery of which specific materials are the most suitable for long flow lengths or complex components
- Replication of manufacturing parameters for various purposes including design, product or numerical simulations and troubleshooting
While other reasons exist, including reducing lost time and wasted material and improving economic efficiency, capillary rheology is first and foremost about measuring true or absolute viscosity.
A Jack of All Trades: The Importance of Being Well-Rounded in the Workplace
Submitted to Robert P. Campbell by
Deron R. Dantzler
in partial fulfillment of CARD410.
June 19, 2005
There are literally hundreds of desirable traits in the workplace. Of these, arguably one of the most important is to be well-rounded. Many skills can assist an individual in being a well-rounded employee. Oral communication skills, written communication skills, teamwork, technical skills, leadership skills, adaptation skills, computer skills, interpersonal skills and analytic abilities are some of the key factors in a well-rounded employee. While these skills all seem to be of equal value to the well-rounded employee, the scope of this paper will only delve into a few of them. Technical skills, oral communication skills and leadership skills will all be detailed in this review in an attempt to help you (the reader) become a well-rounded employee.
Technical Skills in the Workplace
Technical skills are the formal name for the knowledge to perform the task at hand. One acquires technical skills by training in formal school systems or in the work environment. Experience is probably one of the most important factors in growing your technical skill in a subject. The importance of technical skills in the workplace is undeniable. Without the knowledge of the subject at hand, there is virtually no way possible to be a well-rounded person. Without technical skills you are not likely to be able to even do the job at hand.
Here's a brief story, for example, about an individual in the workplace and how his lack of technical skills hindered his ability to be well-rounded, eventually costing him his job. John was a college graduate with a degree in Computer Science. He had completed his degree with a GPA of 3.5. He began his search for a job immediately following his graduation and landed a great job in the technology field based on his merit and because of his professionalism and great communication skills. However, John had very little practical knowledge of the kind used in the IT field. He had no past experience beyond his degree and no industry-level certifications. While his education had trained him in many different facets of computer technology, he lacked the one driving technical skill to help him determine where he would be best suited. It turns out the job that he landed was in computer networking, and when his initial review came up after 3 months, the company decided to let him go because of his lack of technical skills and because they wanted someone more experienced who actually knew how to do the job. Because of John's failure to be a well-rounded employee, and his failure to have technical skills, he lost his job.
But the big question is: What does John do now? How can he obtain the technical skills that he lacked before so that he will be able to keep his next job? John was misled into believing that his degree would provide him with all of the information that he needed in order to compete in the workplace. One of the options that he should have heavily considered in college is internships. Now he decides that he will take some certification-level courses to help get him on track. He uses these courses to build his confidence level and his technical skill set. He excels in his next job because his great oral communication skills are a wonderful supplement to his technical knowledge.
Oral Communication Skills
For obvious reasons, technical skills are important in the workplace. Likewise, oral communication skills are a no-brainer in the workplace as well as in our personal lives. The National Association of Colleges and Employers conducted a survey of hundreds of employers to determine the skills they desire in potential employees. The result showed overwhelmingly that oral communication skills were the most important to the sample set (http://ustudies.semo.edu/oralcom/importance_oc_skills.htm).
Based on this information and common sense, one can easily see the importance of oral communication skills in the workplace. It is vital that individuals be able to express themselves and understand others to be successful in the work environment.
There are several oral communication skills that are the keys to positive communication.
- Positive attitude and a genuine regard for others
- Openness and willingness to share about oneself
- Sense of humor
- Interpretation (getting the right message)
There are oodles of books on the subject of oral communication. Many of these books will help a reader understand how to build oral communication skills. One of the major keys is not shying away from speaking opportunities and actually taking time to practice and build your communication skills. Great oral communication skills start on the inside. You need to have great self-esteem and a great sense of self in order to be an effective oral communicator.
Often, people lacking in oral communication skills are not only not well-rounded in the workplace, they also have a hard time getting a position in the workforce. This is especially the case where customer service and customer relations are involved. This shows, for obvious reasons, the importance of being a great oral communicator.
Let's take a look at Frank in our second example. Frank had worked at an organization for nearly 2 years. He was known as being somewhat shy and reserved. He has otherwise performed very well on the job. He is great at the technical job that he performs. Recently, a vice president of the company came to the local office to discuss the budget situation in Frank's department. Frank stumbled and fumbled when he was having the discussion with the VP, and made himself and the entire department look bad because of his inability to effectively communicate what was going on in the department. It turns out that the VP was visiting to determine which departments could be eliminated since the company was going through budget cuts. While Frank's department was actually a great asset to the organization, it was chosen for elimination primarily because of Frank's failure in the oral communication realm.
Leadership Skills

As the last topic to help our reader become a well-rounded employee, leadership skills are essential. Being able to lead is essential for more than just being well-rounded. Leadership roles typically provide higher compensation and are typically management positions or plateaus in the workplace. Being a leader is important because it provides the individual with a sense of self-accomplishment. In general, leaders help direct the organization in the path that the organization needs to go. Leadership skills, like all other skills reviewed in this paper, need to be practiced and developed. There are many courses and course-study programs to assist one in becoming a great leader. Once again, similar to communication skills, leaders are often born: leadership skills are character traits that people can be born with.
One of the best ways to improve your leadership skills is by simply putting yourself in the position to be a leader. When a team project or role is offered, it is important that you put yourself in the best position to accept the role and give yourself the opportunity to lead. If you shy away, you will not have the opportunity to learn and improve your leadership skills.
Many books are also written on the topic to help one excel as a leader. For now, let's look at an example of how leadership skills can be used to help an individual be successful in the workplace.
Susan, a friend of mine, was provided with the opportunity to take a team leadership position because of her great oral communication skills and her technical knowledge of the task at hand. When given the opportunity, she rose to the occasion and took the leadership spot. Using her communication skills to help organize the team and help the team reach micro goals, the team eventually completed the project in due course, and Susan was placed in a management position because of the excellence with which she led the team to success.
In the example above we can clearly see the benefits of being a great leader in the workplace as well as the benefits of balancing the three skills that we have detailed.
In conclusion, you should have garnered a few important bits of information here. You should realize the importance of having the technical skills in the workplace, having great oral communication skills, and having great leadership skills. However, remember that there are also many other important traits in the workplace. Weve talked about some of the others in the introduction.
The most important piece of information that you should learn is that success is driven not by just being the greatest in one of these skills; rather, a pleasant balance of all of them is what will help you get there. Examples of this were shown earlier. Frank had great technical skills that had helped him excel in the workplace. However, he lost his job because of his inability to effectively use oral communication to convey this message to upper management. Susan was able to gain a promotion by showing a pleasant balance of all three traits that were discussed today.
The most important message that I hope you gather from reading this today is, without a doubt, the importance of being a jack of all trades in the workplace. In today's competitive business world, the importance of balancing these and other positive work skills is greater now than ever before.
a method of sawing logs or timbers, as into boards, in which the piece to be cut is laid horizontally across a pit and cut by a saw operated vertically by two people, one above and one in the pit below the piece.
Origin of pit sawing
First recorded in 1905–10
Dictionary.com Unabridged Based on the Random House Unabridged Dictionary, © Random House, Inc. 2019
Historical Examples of pit sawing
The Yahgans were employed on road-making, chopping, pit-sawing and other work of the hardest kind. (The Gold Diggings of Cape Horn, by John R. Spears)
There are many kinds of compact cylinders available in the marketplace—these pneumatic actuators have been shortened relative to standard pneumatic cylinders. In fact, they can be as much as 50% shorter than the normal cylinder, but still maintain the capacity to exert the same force as their larger counterparts.
The “Pancake cylinder” was the original compact pneumatic cylinder. It was invented by Al Schmidt in 1958, to fill a need for force in a tight, enclosed space. The basic intent was to get the most stroke in a short overall length using common machined parts and seals. Through the years, this design has been further developed, with many features and options to satisfy an extreme variety of customer applications.
This round body cylinder has a smooth, clean outside diameter for ease of machinery cleaning. Even though it was initially used for strokes less than 1-inch, manufacturing methods have allowed increased strokes to as much as 4 inches. Non-metallic rod bushings and piston bearings can accommodate extreme or unforeseen loads to provide long-term durability.
When selecting a compact cylinder, the following application data is needed (a simple sizing sketch follows the list):
• Operating pressure in pounds per square inch
• Force required
• Preferred mounting and footprint, and
• Spring return or double acting
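For the force item in the list above, a cylinder's theoretical output is simply the operating pressure times the piston area; spring-return models lose some force to the return spring, and the rod reduces effective area on the retract stroke. A hedged sizing sketch, with made-up numbers:

```python
import math

# Illustrative sizing check for a pancake cylinder (numbers are made up).
pressure_psi = 80.0   # operating air pressure
bore_in      = 2.0    # piston bore diameter, inches

piston_area = math.pi * (bore_in / 2) ** 2   # in^2
force_lbf = pressure_psi * piston_area       # theoretical extend force

print(f"Theoretical extend force: {force_lbf:.0f} lbf")  # ~251 lbf

# Inverse problem: smallest bore that delivers a required force.
required_lbf = 400.0
min_bore = 2 * math.sqrt(required_lbf / (math.pi * pressure_psi))
print(f"Minimum bore for {required_lbf:.0f} lbf at {pressure_psi:.0f} psi: "
      f"{min_bore:.2f} in")   # ~2.52 in
```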
There are some other items that you may want to consider when selecting a pancake cylinder. These can include:
• Ambient temperature
• Media temperature
• Excessive loads other than required axial force, and
• Load guiding requirement. |
Goethite in iron ore deposits occurs as vitreous goethite and ochreous goethite. It is formed during the chemical weathering of iron ores and banded iron formations, and it can incorporate variable amounts of other elements such as Al, Mn, P, Si, Cd, Ni, Cr, V, Zn and Co in its crystal structure due to its adsorption capacity [1–3].

Iron ore minerals such as magnetite, hematite, limonite, siderite and goethite can be beneficiated and extracted from their respective ores, and the magnetic properties of the ores can be altered through their hydroxyl groups. Magnetite has high magnetic susceptibility, followed by hematite and goethite. In one reported study, reductive roasting followed by low-intensity magnetic separation was used to upgrade iron ore from the Gua mines in Jharkhand; the work aimed to maximise the recovery of iron values by upgrading to a high-grade product suitable for pelletisation and sintering. The received ore contained 58% Fe and 7.82% silica.
Researchers develop new chemistry to make smart drugs smarter
A method to activate targeted drugs, or smart drugs, only at the selected site of action, an approach that improves the drug's therapeutic effect and minimizes side effects, has been developed in a study led by Georgia State University.
Smart drugs, developed to improve the delivery problems of pharmaceutical drugs, are like guided missiles with warheads. They need a targeting molecule to guide pharmaceutical drug molecules to the desired site of action and a trigger to "drop the bomb" or release or activate the drug. In chemistry terms, such smart drugs are conjugates, or links, between a targeting molecule and a drug molecule.
For the most part, the issue of guiding and enriching such smart drugs to the desired site of action has been resolved. An example is the use of antibody-drug conjugates, an emerging class of cancer treatment that targets the delivery of drugs to cancer cells. However, the issue of when and how to trigger drug release, particularly at a sufficiently high concentration, has been a challenging task.
This study introduces new chemistry and a new concept to allow for "enrichment-triggered activation" of the drug molecule after delivering the smart drug to the desired site of action. The study tested doxorubicin, an anti-cancer drug, and carbon monoxide, an anti-inflammatory agent, using this delivery method and found the targeted approach effectively treated diseases such as acute liver injury in mice and cancer in cell culture. The researchers linked the active drug to a targeting molecule and then triggered the release of the drug at the desired site of action. The findings are published in the journal Nature Chemistry.
"The general idea is we have a targeting molecule that is conjugated to a payload (pharmaceutical drug molecule), and in between, there's a linker," said Dr. Binghe Wang, Regents' Professor of Chemistry and director of the Center for Diagnostics & Therapeutics at Georgia State, a Georgia Research Alliance Eminent Scholar in Drug Discovery and a Georgia Cancer Coalition Distinguished Cancer Scholar. "The entire purpose of this is to enrich drug concentration at the site of action. This allows a higher concentration of the drug at the site of action, but minimizes the concentration elsewhere. Essentially, it's almost like a guided missile.
"What we have developed is an approach called enrichment-triggered prodrug activation. Most other chemical approaches rely on some kind of linker chemistry that is not specific enough or there's a premature release in the general circulation. What we have essentially is a way to control release once the concentration of the drug reaches a certain level. Let's say you have someone who has prostate cancer. If the drug concentration at the prostate can be a hundredfold higher than the concentration in the bloodstream, chances are you can probably kill all the cancer cells without causing all these side effects."
In this study, the researchers used this targeted drug delivery approach to administer carbon monoxide to mice and treat acute liver injury. They saw a very potent effect, maybe 10 to 30 times more effective than traditional drug delivery, Wang said. They also tested the anti-cancer drug doxorubicin in cell culture.
They found it's necessary to use a very stable linker to connect the targeted molecule and active drug so the linker can remain steady as it circulates in the bloodstream. They also needed to trigger a mechanism to release the drug at a desired site of action.
"The linker chemistry design has been very tricky," Wang said. "There's a lot of effort that went into it. What we have is something very unique in the sense that we have designed an approach that is not based on typical chemistry. When the (drug) concentration reaches a certain level, then it will automatically start releasing very quickly."
While this study's targeted drug delivery approach resembles that of antibody-drug conjugates, which use an antibody (a protein that recognizes foreign substances) to target markers on the surface of cancer cells, the current approach doesn't require having antibodies.
"There are many other molecules that one can use to target different kinds of tissues, diseased organs or sites," Wang said.
More information: Yueqin Zheng et al. Enrichment-triggered prodrug activation demonstrated through mitochondria-targeted delivery of doxorubicin and carbon monoxide, Nature Chemistry (2018). DOI: 10.1038/s41557-018-0055-2
Provided by: Georgia State University |
Fracking is the future, according to Lord John Browne, the former chief executive of BP. He is known as the fracking czar for his enthusiasm for bringing the process to Great Britain.
Hydraulic fracturing, known as fracking, uses a combination of water, sand, and chemicals to release the natural gas and oil trapped in shale. Lord Browne wants to take fracking to a new level in Great Britain.
According to the British Geological Survey, the Bowland-Hodder formation in England's midsection contains shale deposits holding more than 1,300 trillion cubic feet of gas in place. Within that shale are deposits of oil as well as natural gas. With fracking, Great Britain could have enough oil and natural gas for 40 years.
He believes fracking is a secure domestic energy source that will create a plethora of new jobs and generate billions in tax revenue. Lord Browne considers fracking a better alternative to constructing nuclear plants and importing natural gas, with the potential to transform the entire country.
Besides being a director in the British government’s Cabinet Office, Lord Browne currently holds the position of chairman of Cuadrilla Resources, a company ready to drill wells in the Bowland-Hodder formation. Later in the year, the British government will issue oil and gas exploration licenses to drill wells in England, Wales, and Scotland.
Lord Browne is also a board member with Riverstone, a private-equity firm that has invested $27 billion into energy companies. In 2010, Riverstone spent $58 million to acquire a 41 percent stake in Cuadrilla.
Lord Browne has spent the better part of his life working with oil. He earned a degree in physics from the University of Cambridge in 1969. After graduating, he joined BP as a field engineer and helped develop Alaska's North Slope. For the next 25 years, he managed exploration and production projects in the North Sea and the Gulf of Mexico. For his work in the oil industry he was made a life peer in 2001, becoming Lord Browne of Madingley.
Michael Fallon, Britain’s Energy Minister said that one way or another shale gas production is coming. It would be more beneficial if that production comes from domestic sources than from imports.
Fadel Gheit, an oil industry analyst at Oppenheimer, said environmentalists should approve of natural gas exploration. Composed mostly of methane, natural gas emits half the carbon dioxide of coal when burned. Gheit believes Lord Browne has taken the right course: fracking will transform the future of energy, and Europeans will have to accept shale extraction as necessary for domestic development.
Matthew Spencer, director of Green Alliance, contends that burning more natural gas would undermine efforts to limit climate change. If the development of natural gas does not come with environmental constraints, fracking will be disastrous.
Another problem is that British landowners do not own the mineral, oil, and gas rights beneath their property; the British Crown does. That means companies such as Cuadrilla cannot win local support by paying royalties for drilling rights.
For Lord Browne, domestic fracking is better than working in unfamiliar territories with unstable governments. He is confident of being able to deal with the stringent rules set by the British government and the EU, and that fracking will be Great Britain's future.
By Brian T. Yates
Bloomberg Business Week |
Electricity is a commodity that most people cannot go without. It has numerous benefits, such as powering machinery, electronic devices, and lighting, and it is a necessity for industries as well as domestic users. With global warming on the rise, many governments are looking at ways of producing this essential commodity without increasing their carbon footprint. Hydroelectric energy is a clean source of energy which comes with numerous benefits as well as some shortcomings.
Hydroelectric energy is a renewable source of energy produced from water. Most hydroelectric plants use dams to collect the water that is used to generate electrical energy. Three types of hydroelectric plants are commonly used to produce this energy: diversion, impoundment, and pumped-storage plants. An impoundment facility releases water stored in a reservoir through turbines to produce electricity. A diversion plant, by contrast, channels part of a river's natural flow. Pumped-storage plants rely on a pumping system rather than a dam alone, moving water uphill to a reservoir for later release. The disadvantages of hydroelectric energy are covered below as well.
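Whichever configuration is used, the electrical power available is governed by the same relationship: water density times gravity times flow rate times head, scaled by the plant's efficiency. The Python sketch below illustrates this standard hydropower formula; the 100 m³/s flow, 50 m head, and 90% efficiency are hypothetical example values, not figures from any particular plant.

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_mw(flow_m3_per_s: float, head_m: float,
                   efficiency: float = 0.9) -> float:
    """Electrical power from falling water: P = rho * g * Q * h * eta."""
    watts = RHO_WATER * G * flow_m3_per_s * head_m * efficiency
    return watts / 1e6

# Hypothetical impoundment plant: 100 m^3/s through a 50 m head.
print(f"{hydro_power_mw(100.0, 50.0):.1f} MW")  # ~44.1 MW
```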
Hydroelectric energy is an affordable source of energy because it is produced using water, which is collected in reservoirs and recycled to produce power. It is also a clean source of energy, because producing it does not emit harmful gases like carbon dioxide into the atmosphere; this makes it preferable to fossil fuels, which produce large amounts of greenhouse gases. Furthermore, unlike fossil fuels, which pose many dangers to the environment if they leak, hydropower water is not a pollutant: any water lost to evaporation is recycled back through the natural water cycle. Notably, this form of energy is also more reliable because water is readily available.
Nonetheless, there are some downsides to this form of energy. During periods of drought, the water levels in the reservoirs drop and it becomes a challenge to produce energy. When this happens, the users who depend on these power plants for electricity have to endure rationing or prolonged blackouts. The cost of building hydroelectric facilities is also very high: the dams and reservoirs hold large volumes of water and need sophisticated systems for releasing water to the turbines below and recycling it back up to the reservoirs, all of which costs huge sums of money to set up. The construction of these facilities also interferes with the environment; for instance, it can disturb the natural habitat of aquatic life.
Get more ideas at https://www.britannica.com/science/energy |
Concrete: con•crete (kon′krēt, kong′-, kon krēt′, kong-), adj., n., v., -cret•ed, -cret•ing.
- constituting an actual thing or instance; real: a concrete proof of his sincerity.
- pertaining to or concerned with realities or actual instances rather than abstractions; particular (opposed to general): concrete ideas.
- representing or applied to an actual substance or thing, as opposed to an abstract quality: The words "cat,'' "water,'' and "teacher'' are concrete, whereas the words "truth,'' "excellence,'' and "adulthood'' are abstract.
- made of concrete: a concrete pavement.
- formed by coalescence of separate particles into a mass; united in a coagulated, condensed, or solid mass or state.
- an artificial, stonelike material used for various structural purposes, made by mixing cement and various aggregates, as sand, pebbles, gravel, or shale, with water and allowing the mixture to harden. Cf. reinforced concrete.
- any of various other artificial building or paving materials, as those containing tar.
- a concrete idea or term; a word or notion having an actual or existent thing or instance as its referent.
- a mass formed by coalescence or concretion of particles of matter.
- set or cast in concrete, to put (something) in final form; finalize so as to prevent change or reversal: The basic agreement sets in concrete certain policies.
- to treat or lay with concrete: to concrete a sidewalk.
- to form into a mass by coalescence of particles;
- to make real, tangible, or particular.
- to coalesce into a mass;
- to use or apply concrete.
Countertop: count•er•top (koun′tər top′), n.
- a counter, as in a kitchen, esp. when covered with a heat- and stain-resistant material.
- designed to fit or be used on a countertop: a countertop microwave oven.
[counter + top]
Sealer: seal•er (sē′lər), n.
- an officer appointed to examine and test weights and measures, and to set a stamp upon such as are true to the standard.
- a substance applied to a porous surface as a basecoat for paint, varnish, etc.
[Image: CHENG Concrete Countertop Sealer (Outdoor Concrete Countertop Sealer #7)]
Placing a piece like this in a room demands careful calculation: furniture positioned at random leaves a room feeling cramped and unpleasant and cannot create an attractive space. One such piece of furniture, found in a private room like a bedroom, is the dressing table.
A dual-function dresser is the appropriate choice if your room is not especially large. For example, a dressing table can double as a desk, or you can choose a unit fitted with plenty of drawers so it can also serve as storage for other knick-knacks.
A well-placed dressing table can show off the attractive side of a private room. Before buying one, it is wise to measure the floor area it will occupy and to avoid any dressing table that exceeds the space available in the room. |
Hutchinson's manufacturing plants in Poland produce low-pressure fluid tubing. The tubing is made of rubber and plastic, and the tubes transfer fluids such as oil, fuel and water.
Main product groups:
– Fuel tubes made of plastic
– Engine cooling lines
– Tubes for air vacuum systems
– Clutch and brake assist lines
– Quick connectors for fuel and water pipes
– Rubber hoses for coolers
– Rubber hoses for engine cooling
During summer in a temperate climate, the temperature in a manufacturing hall rises considerably because of heat gains from machines and the sun-heated roof. The temperature in a production hall can exceed 35°C. Excessive temperature can cause health problems and a loss of effectiveness: according to scientific studies, a temperature of 35°C lowers production effectiveness by 16% compared to work at the optimal temperature of 23°C ['Cost Benefit Analysis of the Night-Time Ventilative Cooling in Office Building', Helsinki University of Technology; Lawrence Berkeley National Laboratory].
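Those two data points can be turned into a rough rule of thumb. The sketch below linearly interpolates between 23°C (no loss) and 35°C (16% loss); the linear shape is an assumption made here for illustration, not something the cited study claims.

```python
def productivity_loss(temp_c: float) -> float:
    """Estimated fractional productivity loss relative to the 23 C optimum.

    Linear interpolation between the two figures quoted in the text
    (0% loss at 23 C, 16% loss at 35 C); linearity is an assumption.
    """
    if temp_c <= 23.0:
        return 0.0
    return (temp_c - 23.0) / (35.0 - 23.0) * 0.16

# Cooling a 35 C hall by about 10 C, as evaporative coolers can,
# would cut the estimated loss from 16% to roughly 2.7%.
print(f"{productivity_loss(35.0):.1%} -> {productivity_loss(25.0):.1%}")
```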
Many production centers have ventilation systems, but these are not able to lower the temperature on a hot summer day. To lower the temperature efficiently during summer, evaporative coolers have to be installed. Evaporative coolers not only lower the temperature by about 10°C but also blow in fresh, clean outdoor air. The lowered temperature and large airflow provide good working conditions. The installation of coolers at Hutchinson Poland is shown in the accompanying pictures. |
On Saturday morning, the most successful rocket the United States has ever developed flew its final mission. During the pre-dawn hours, United Launch Alliance's Delta II rocket lifted NASA's ice-monitoring mission ICESat-2 into space. It was a bittersweet moment, as the Delta II's retirement marks both a step into the future of US rocketry, while representing a definitive break with the past—and the very origins of US spaceflight.
"Historic day," the chief executive officer of United Launch Alliance, Tory Bruno, said on Twitter. "Retired the shark, Delta II and the mighty Thor." The "shark" was a reference to the shark teeth painted on the payload fairing of GPS launches, an homage to the "Flying Tigers," American volunteer pilots who helped defend China from Japan in 1941 and 1942.
The Delta II rocket can trace its heritage back to the dawn of serious US rocketry, with the Thor intermediate-range ballistic missile that was developed by the US Air Force in the 1950s and first deployed in Great Britain in 1959. This 20-meter rocket, designed to carry thermonuclear weapons, served as a template for the original Delta rockets.
The first Delta rockets were used by NASA and other government agencies to launch satellites during the 1960s and 1970s. During the 1970s, at the insistence of the White House, the Air Force agreed to work with NASA and its space shuttle program to fulfill the military's launch needs. The shuttle’s first Department of Defense flight launched in June of 1982, but after the space shuttle Challenger accident in 1986, the Reagan administration issued a National Space Launch Strategy that directed the military to develop its own rockets.
This led to an upgraded Delta rocket, with a longer version of the Thor fuel tank, which became known as the Delta II rocket. It first launched in 1989 and has since flown 155 successful missions, with only one total failure. The original goal of the fleet was to launch GPS satellites for government and, later, civilian use. So you can thank the Delta II the next time you check your phone's map for directions.
As useful as the rocket has been for the military, however, it has also been a proven workhorse for NASA. Of the rocket's 155 successful launches, 54 were conducted for the space agency, including eight robotic missions sent to Mars. These missions include Pathfinder, which delivered the first Martian rover, in 1996; Spirit and Opportunity in 2003; and the Phoenix lander in 2007.
"We're honored that our customers have trusted us with a lot of critical missions over the years," Scott Messer, the program manager for the Delta II program for United Launch Alliance, told Ars in an interview.
So why retire the most successful rocket in US history? Because time moves on. Before its retirement, no rocket other than the Russian Soyuz booster had remained active longer than the Delta II rocket. However, in recent decades more-capable, less-expensive options have emerged.
A decade ago, the US Air Force had already sought to transition to the more powerful Delta IV and Atlas V rockets, which are also built by United Launch Alliance. Since 2012, the Delta II has averaged fewer than one launch a year, and its cost has escalated due to the need to keep production lines open for so few missions.
Moreover, the Falcon 9 rocket built by SpaceX has also pressured the Delta fleet. At $60 million, it costs significantly less than the Delta II booster, with three to four times the capacity in terms of tonnage to low-Earth orbit.
Listing image by United Launch Alliance |
Grinding is among the oldest manufacturing processes; its development is chronicled in History of the Grinding Machine: A Historical Study in Tools and Precision Production (Massachusetts Institute of Technology, Technology Monographs, Historical Series, Issue 2). Despite its long history, grinding is often seen as shrouded in mystery because of the numerous cutting points and their irregular geometry, the small depths of cut that vary from grain to grain, and the sorcerous sparks shooting off the wheel.
C.H. Norton of Massachusetts dramatically illustrated the potential of the grinding machine by making one that could grind an automobile crankshaft in 15 minutes, a process that previously had required five hours. The centerless grinder, which grinds a workpiece that is not supported between centers, boasts a long and sometimes contentious history. During the machine's early days, it wasn't unusual to specify that parts "not be ground on a centerless grinder." But today's CNC centerless grinding machines achieve tolerances of 10 millionths of an inch. |
We are drowning in plastic. Science Magazine has reported that as many as 8.3 billion tonnes of virgin plastic have been produced to date, with the vast majority of this ending up in landfill or the natural environment where it will take centuries to break down. In our seas and oceans, a World Economic Forum report has said that plastics will outnumber fish (by weight) by 2050. Plastics manufacturers' reliance on climate-destroying fossil fuels as their base ingredient, with added toxic chemicals, leads to incalculable impacts on our bodies and the environment. It is hard to exaggerate the scale and urgency of the challenge facing the world: to substantially reduce our plastics use, and to ensure we reuse and recycle as much and as quickly as possible.
For decades the plastics industry has preferred to focus on litter and its clean-up, rather than addressing some of the more fundamental issues such as product design, manufacturing processes, and developing re-useable or non-plastic alternatives. Corporate Europe Observatory recently exposed how the packaging industry and its customers in the food and drink sector support various anti-litter campaigns across Europe, to both greenwash single-use plastic packaging with the veneer of environmental respectability, but also to try to shift responsibility for tackling plastic waste onto local authorities and citizens.
In the run-up to the publication of the much-anticipated Plastics Strategy, during 2017 highly active corporate lobbyists were granted the lion's share of access to Commission officials. The Commission also proactively reached out to key plastics industry lobby groups, in order to try to secure voluntary industry commitments to support its own targets, but the industry response was lacklustre. Since the publication of the Commission strategy, industry has been given further opportunities to make voluntary commitments to boost the amount of recycled content in plastic products, but again with little result. Industry remains opposed to key elements of the Strategy.
As the Juncker Commission enters its final year in office it is make or break time for its Plastics Strategy commitments. It is time to recognise that the collaborative approach with industry has failed and to deliver on its promises for ambitious new legislation.
Industry’s lobby access on Plastics Strategy
Ever since the Commission launched the roadmap towards its Plastics Strategy in January 2017, plastics has been a hot topic for industry lobbyists who have poured through the doors as officials started the delicate process of drafting the proposal.
In the 12 months from the launch of the roadmap to the publication of the final strategy in January 2018, officials in the two lead departments, DG Environment and DG Grow (which is responsible for industry and the internal market), held 44 lobby meetings on the Plastics Strategy, and 89 per cent of them (39 meetings) were with industry. Only 3 meetings were held with NGOs and 2 with others. Adding in meetings with Commissioners, cabinet members, or directors-general, plus those held with Secretariat-General officials, brings the total number of Commission lobby meetings on the Plastics Strategy in the year before it was published to 92. Of these 92, 76 per cent (70 meetings) were with corporate interests and only 17 per cent (16 meetings) with NGOs.
PlasticsEurope led the charge with a total of 13 meetings with the Commission. These included 7 with officials, and 4 further meetings to discuss the process to develop voluntary commitments by industry to be included within the Commission's own strategy (see below). PlasticsEurope is one of Brussels' biggest lobby groups, deploying the equivalent of 8 full-time lobbyists and holding the same number of European Parliament access passes. Its members include all the big names in chemicals and petrochemicals: BASF, Borealis, Dow Europe, ExxonMobil Chemical, Ineos, Novamont, Solvay, and many others. PlasticsEurope's self-declared EU lobby spending figures indicate it has an annual lobby budget of €1,500,000 - €1,749,000 (2016 figures; 2017 figures not yet available). It shares an office building with CEFIC, the European Chemical Industry Council, with whom it shares many interests and collaborates closely. Leaks in 2015 of PlasticsEurope's and CEFIC's lobbying strategies on chemicals in plastics exposed how the undermining of science had been an effective advocacy strategy. CEFIC is one of Brussels' highest-spending lobbyists and in 2017 had 5 meetings with the Commission on the Plastics Strategy alone.
Additionally, a series of ‘stakeholder’ workshops were held during 2017 by DG Environment (eight workshops) and DG Grow (three workshops) on a variety of topics including single-use plastics, packaging, marine litter, deposit schemes, and others. While the attendance lists have not been provided for all of these workshops, the information available indicates that corporate interests again dominated and that PlasticsEurope attended at least some of them.
There were several other corporate gatherings on plastics in 2017. One took the form of a breakfast meeting for industry chief executives with Vice-President Frans Timmermans and was held before the Commission-hosted conference Reinventing plastics – closing the circle in September 2017. Attending this private breakfast were some familiar corporate names including PlasticsEurope. Another event was a round-table with Vice-President Katainen in March 2017 where PlasticsEurope and others were again present.
Of course, NGOs were active in 2017 in the run-up to the launch of the plastics strategy, but given industry’s overwhelming access, the Commission cannot have been in any doubt about the demands of corporates with an interest in the plastics supply chain.
Voluntary industry pledges go missing
A leak of the European Commission’s draft Plastics Strategy found its way into the public domain in October 2017, with an eye-catching line at the top of page six: “[add here possible announcement of voluntary commitments by Plastics Europe]”. Yet when the final strategy was published in January 2018, there were no industry commitments included. What happened?
Over several months the Commission, specifically DG Environment and DG Grow, had been negotiating with industry players to produce some voluntary commitments for inclusion in the strategy, specifically PlasticsEurope (whose industry members produce 90 per cent of the polymers produced in Europe), European Plastics Converters Association (EuPC, which represents plastic manufacturers and processors), and Plastics Recyclers Europe (PRE, representing the plastics recycling industry). The Commission had already committed itself to a target of all plastics packaging to be recyclable by 2030 and was keen to secure industry cooperation.
The Commission must have thought this was a promising strategy. After all, several cross-industry platforms had been set up and visibly championed by industry with a view to developing some voluntary pledges and actions, including on polyolefins and on polystyrene. But voluntary targets and commitments represent a classic lobbying tactic, now adopted by the plastics industry: they can produce the appearance of taking action on an issue and crucially, might just convince public authorities not to implement tough regulation and mandatory rules. However, light-touch regulatory approaches by the Commission, for example for the diesel car industry or alcohol labelling, have proved to be deeply and notoriously problematic.
Correspondence and minutes released by the Commission to Corporate Europe Observatory via access to documents requests show that there was a lack of common agreement between industry players on joint commitments to be included within the Plastics Strategy. In November 2017, an email exchange between the Commission and industry players indicates that the Commission was eager to discuss voluntary commitments with industry: “The sooner we, all, will be able to meet and discuss, the better!”. Yet there is a split within industry: while PlasticsEurope was producing drafts (the eighth version was circulated), the EuPC appeared to want to distance itself from some elements.
At the same time DG Environment emailed PlasticsEurope’s Executive Director in quite frank terms about the draft document: “Thank you for sharing this "work in progress", which we got just in time to reassure our upper levels of hierarchy that your efforts are ongoing and congratulations for the work done so far. However let me also share our feeling that there is still a long way to go until this would become the "ambitious and credible" commitment that we are looking for from value chain participants.”
Two further meetings were held with industry in December 2017 but meeting reports make clear that “full alignment hasn't been reached” and that while the Commission was welcoming progress made, it was also stressing that “a revised version [of the voluntary commitments] with quantitative targets and precise time-frame was needed and should be shared as soon as possible”. By the time a final meeting was held on 8 January 2018 it was clear that, with the Commission’s Plastics Strategy due for imminent publication, time had run out for industry’s voluntary commitments to be included.
The Commission published its strategy on 16 January 2018, and restated its target that “By 2030, all plastics packaging placed on the EU market is either reusable or can be recycled in a cost-effective manner.” While industry’s voluntary commitments were not included within the Commission’s strategy, they were self-published on the same day.
PlasticsEurope’s voluntary commitment was substantially weaker than the Commission’s; the industry expressed the “ambition” to reach 60 per cent reuse and recycling for plastics packaging by 2030, saying “This will lead to achieve our goal of 100% re-use, recycling and/or recovery of all plastics packaging in the EU-28, Norway and Switzerland by 2040”. “Recovery” in this context means energy recovery, also known as incineration, which is controversial and highly damaging for health and the environment. PlasticsEurope’s “general commitments” include prevention of plastic pellets in the environment, and low-hanging fruit such as “raising awareness” and a focus on litter prevention. It places most emphasis on the voluntary industry platforms, although clear action plans were missing. As Politico pointed out at the time, PlasticsEurope said it would “increase efforts” to make it easier to reuse and recycle plastics but made no firm commitment to boost the use of recycled content in products.
The voluntary commitment by other industry players, including EuPC and PRE, also appears to be 10 years behind that proposed by the EU. While the Commission’s target is that “By 2030, more than half of plastics waste generated in Europe is recycled”, industry says it will “launch Circularity Platforms aiming to reach 50% plastics waste recycling by 2040”, although producers of PET plastic (often used in bottles) specifically committed to recycling and reusing of 65 per cent of its packaging by 2030.
Industry fails to pledge action
As the Commission could not include industry’s voluntary commitments in its strategy as originally planned, and crucially neither industry pledge included commitments on minimum content targets which the Commission had been pushing for, the Plastics Strategy announced a pledge scheme to encourage a wider set of corporate players to make commitments to boost the uptake of recycled plastics. The Commission’s objective is to ensure that by 2025, 10 million tonnes of recycled plastics find their way into new products on the EU market.
However, at least by early May 2018, more than three and a half months on from the launch of the Strategy, no pledges had been received, although a number of companies were said to be “preparing” them. Instead there have been lobbying calls from BusinessEurope, the corporate world’s most significant EU lobbyist, for there to be “flexibility” on the 30 June 2018 deadline, with a strong expression of support for such voluntary approaches. When BusinessEurope gets involved, it usually means that it is considered an issue for industry across sectors.
The Commission’s strategy made clear that “Should the [pledged] contribution[s] be deemed insufficient, the Commission will start work on possible next steps, including regulatory action.” In other parts of the world such as California, legislation is in place to require 25 per cent minimum recycled plastics content in new packaging products, providing a clear goal and a boost to the recycling industry. Given the urgency of the situation, a legislative approach, as opposed to reliance on industry promises, is far more likely to deliver change.
Industry opposed to single-use bans and new producer fees
One of the most talked-about elements of the Commission's approach is its commitment to introduce legislation on single-use plastics, likely to take the form of a list of products (such as coffee stirrers, cotton buds, drink straws, and others) which would be banned outright, and other products for which member states would be obliged to substantially reduce use, with industry producers obliged to pick up the costs of their collection and treatment through so-called 'extended producer responsibility' schemes. This follows the success of existing EU rules to cut plastic bag use, which, for example in England, have reduced the number of bags used by 80 per cent.
The Juncker Commission is on a tight deadline to publish a legislative proposal on single-use plastics before the end of May 2018 when it will enter its final year in office and time will start to run out for the EU’s law-making process. The proposal is expected to be signed-off at the Commission College meeting on 23 May.
Under the Commission’s so-called 'Better Regulation' decision-making process however, all new legislative proposals must be accepted by the Regulatory Scrutiny Board and that requires an impact assessment. The 'better regulation' system – put in place after heavy lobbying by business associations – has been criticised as a mechanism for corporations to weaken or hamper the introduction of new rules that could affect their profit margins. Sure enough, in April 2018 according to Politico, the Regulatory Scrutiny Board “slammed” the Commission’s proposal on single-use plastics due to “weak data”, among other reasons. More recently, the Regulatory Scrutiny Board is said to have now accepted a revised impact assessment for the single-use plastics proposal, although it is not precisely clear how ambitious the Commission was ultimately able to be. A leaked version received by Politico indicated that several types of single-use plastic products were in the Commission’s sights, either for outright bans, or to become part of ‘extended producer responsibility’ schemes, but we will only know for sure as and when the final legislative proposal is published.
Nonetheless, business has lost no time in slamming the leaked proposal in familiar ways: “We are concerned about some very far-reaching proposals … Rather than a ban, it is better to focus on the current voluntary pledging campaign to make plastics more circular”, said BusinessEurope. The packaging industry lobbyist Eamonn Bates was quoted as saying: “Marine litter is a major problem and it must be tackled. But product bans are not the solution … This Commission is simply looking for a few ‘fall guys’ for the press headlines rather than action based on evidence.”
Politico also said that businesses were warning that “putting pressure on producers with a ban and extending the scope of producers’ responsibility to littered items” will “discourage them to engage with the voluntary campaign”, surely another sign that they intend to play hardball.
Plastics tax off the agenda
Given industry's obstructing tactics on other elements, unsurprisingly it is fighting back against another element of the Commission’s plastics strategy, namely a plastics tax.
A plastics tax was first mooted by Commissioner Oettinger who is responsible for the EU budget and is concerned about how to deal with the UK-sized hole in member states’ EU contributions post-Brexit. The final published Plastics Strategy did include a vague commitment that the Commission would “explore the feasibility of introducing measures of a fiscal nature at the EU level”.
Such a tax is clearly opposed by the plastics industry and its allies. PlasticsEurope said: “We do not believe that this would be reasonable.… A new tax on plastics, plastic products or products containing plastics would be very complicated. In the end, the consumer would have to pay for it.” BusinessEurope also did not mince its words in a March 2018 letter to the Environment Minister from Bulgaria (which currently holds the rotating EU Council Presidency): “In our view, if there is one way to kill off appetite for investment in research and innovation in [sic] circular economy, it is to raise a non-material tax of which the revenues flow into the general state coffers”.
The Commission hosted a ‘stakeholder’ workshop to explore the options for a plastics tax in March 2018 but industry players who attended were very “defensive” on the matter. A Commission document, seen by Corporate Europe Observatory, summarised views from this event thus: “The introduction of a dedicated new tax at EU level would be problematic from a competitiveness and subsidiary angle”. As the majority of those attending were NGOs who were largely positive about the idea of a plastics tax, this seems a strange way to summarise the debate! Certainly now an EU-wide tax on new plastics does not look likely anytime soon; disappointingly, Oettinger’s May 2018 EU budget announcement only proposed introducing a “national contribution calculated on the amount of non-recycled plastic packaging waste in each Member State”, which places the emphasis on recycling rather than a reduction in overall plastic production.
Conversely, while opposing a plastics tax, industry has not been shy to request public funding to support its initiatives. For example PlasticsEurope's voluntary commitments came with the caveat that the EU should "mobilise public funding (such as Horizon 2020) to encourage and stimulate innovation in plastics", a demand echoed in at least one meeting with the Commission. This is despite the fact that under Horizon 2020 (the Commission's seven-year research programme) over €250 million has already been made available for plastics research, with an additional €100 million on its way for 2018-20, from which industry and other sectors benefit. As part of its development of the successor programme to Horizon 2020, the Commission is considering scaling-up research funding for targeted "missions". "Plastic free oceans" is one such mission proposal, and if agreed, could see funding of up to one billion euros, with industry again likely to be among the prime beneficiaries of these public funds.
Uncoordinated commitments, coordinated messaging
While different industry players were not able to come up with a coordinated package of voluntary commitments to present to the Commission for inclusion in the Plastics Strategy, they have been singing from the same hymn sheet on other matters.
Across the board, PlasticsEurope, the European Plastics Converters Association, and Plastics Recyclers Europe have been actively arguing to maintain the EU internal market legal base for the EU Packaging & Packaging Waste Directive and all its amending acts. This might seem a technical point, but it is a way for industry to try to ensure harmonised rules across all 28 EU member states, and crucially also to head off action by member states which might wish to go further than the EU, say, by banning additional products.
For example when in 2016 the French Government passed a new law which included provisions to ensure all cups and plates should be made with biologically-sourced and compostable materials rather than plastic, packaging lobbyist Eamonn Bates opposed it saying, “We are urging the European Commission to do the right thing and to take legal action against France for infringing European law…. If they don’t, we will.” He has similarly criticised the Irish Waste Reduction Bill which aims to ban some single-use plastics and introduce a deposit return scheme, on the grounds that it would break single market rules on packaging and the free movement of goods.
Throughout their documents industry talks of “increasing circularity and resource efficiency”, “a more sustainable production and consumption of plastics and plastic products”, and “contribut[ing] to emissions reduction”. Of course these are good and desirable aspirations, but coming from the plastics industry are also code for ‘let’s keep on producing more plastic products’. Most of industry interprets notions of a ‘circular economy’ and ‘resource efficiency’ to mean producing lighter products, reusing resources to get more value out of them, and finally burning them to produce energy. Plastic as a lightweight product generally compares favourably with packaging alternatives when looking at, say, greenhouse gas emissions from transport. But this fails to recognise the huge amount of fossil fuels used in plastics production in the first place and the extent to which plastic packaging drives the global food economy.
There is an imperative to develop alternatives to plastic, for packaging and other applications, moving away entirely from disposable and single-use products, while considering the full environmental impact of plastic production and waste. In the waste hierarchy, while recycling and reuse are seen as positive, it is reduction of use which is the most important goal. But the plastics industry can be expected to keep fighting against any regulations which seek to reduce the amount of plastic in circulation around the world.
Conclusion: words not deeds
The industry response to the Commission’s Plastics Strategy has not been flashy or eye-catching, and it has not come out ‘all guns blazing’ to oppose it. Rather it has been welcomed in broad terms by the plastics and other industries, who are undoubtedly aware of the pitfalls of being on the wrong side of public opinion. But in concrete terms, it is no doubt hoping to delay or even derail legislative efforts or otherwise throw a spanner in the works.
With the Commission due to publish its single-use plastics legislative proposal any day now, we will soon know the extent to which the industry has managed to throw it off track, or whether the Commission remains committed to ambitious action. Subsequently, the lobbyists will no doubt pursue the legislation down the line and shift focus to the Parliament and member states.
We know that when the Parliament and Council discussed the plastics bag ban a few years ago, they were subjected to serious industry lobbying on the topic, with lobbyists joining forces with the MEPs and member states unwilling to take action. It is likely that the industry, including producers of single-use plastic products, are already gearing up for a similar fight. |
Wind power added jobs over nine times faster than the overall U.S. economy in 2016, according to the American Wind Energy Association. According to its “2016 U.S. Wind Industry Annual Market Report,” more than 8 gigawatts of new wind power were installed for a second straight year. With the addition of nearly 15,000 jobs in 2016, the wind industry now supports a record-high 102,500 jobs in all 50 states. There’s now enough wind to power 24 million typical American homes, says AWEA. The report also noted that more than 99 percent of wind farms are built in rural communities and that wind now pays over $245 million per year in land-lease payments to local landowners, mostly farmers and ranchers.
– via North American Windpower |
Mining references cover information on gold and silver flotation, mineral processing, and carbon-in-leach plants. Early miners resumed work in some of the silver mines of the Santa Rita mountains, introducing the amalgamation method of processing silver ore with mercury, and lists of the ten biggest silver-producing mines in the world have been compiled from available 2010 data.
The basic techniques of mineral ore processing apply to gold, silver and copper ores alike. One medieval mine tour presents a 16th-century mining machine, the medieval technology of mining and minting, and a miners' settlement, tracing the whole process through which silver ore passed until a silver coin was struck. At one lead and silver mine, shortcomings in the feasibility study became apparent only once mining and processing had started.
The process design of gold leaching and carbon-in-pulp (CIP) circuits is described in the Journal of the South African Institute of Mining and Metallurgy (January/February 1999). Water consumption at copper mines in Arizona varies as mining and processing rates change and as markets shift. Suppliers to small-scale artisanal mining cover the entire requirement of a small mine, from primary ore extraction, including drilling and blasting, through ore haulage and movement to ore processing.
Butte began in the late 1800s as a gold and silver mining town, with mines and associated smelting and processing. In Texas, silver mining began about 1885 in the Van Horn-Allamoore district in Hudspeth County. Silver mine planning includes detailed underground and open-pit mining plans and processing plant engineering plans. The African American artist Grafton Brown prepared a map of mining claims in the Comstock mining district, where the pan process was used to unlock the stubborn silver. In California, gold- and silver-bearing gossans were originally mined in districts such as the Rawhide mine during the 1860s. Lithium mining, as at the Silver Peak brines, uses a rather different process. The ore of the Silver Peak mining district contained a large percentage of base metal sulfides and was very difficult for the miners to process and extract the silver. |
Wind power has been increasing its weight in the global energy production portfolio, representing 487 gigawatts (GW) in production capacity as of the end of 2016. In actual production terms, wind power accounted for ~5% of the total electricity produced in 2016, based on data from 44 countries. Although still a marginal portion of the electricity production capacity, the wind power installed base has been growing quickly over the last few years (see below chart).
Wind Power Global Capacity and Annual Additions, 2006-2016
Source: REN21 Renewables 2017 Global Status Report
Estimates indicate that cumulative wind power related investments could represent $3.3 trillion by 2040, a compound annual growth rate in investment of 3.4% over the period. It is also expected that the levelized cost of new electricity from onshore and offshore wind will decrease by 47% and 71%, respectively, over that same period.
Even though the international community agrees the renewable energy transition must accelerate, the investment priority among the various renewable sources is still a sensitive topic. Indeed, wind power advocates face various criticisms regarding wind turbines' negative impact on wildlife, the uncertainty around recycling, bothersome noise generation, and the reliability of the minimum wind speed required to make a wind project economically viable.
We met with Semtive, a startup designing a vertical-axis wind turbine (VAWT) for the residential and commercial/industrial markets which solves most of the issues mentioned above. Indeed, the NEMOI wind turbine, in both its M and S models, does not harm the local fauna thanks to its horizontally rotating blades, whose trajectory is perceived by birds and bats, thus preventing any collision risk. Also, NEMOI is 95% made of recyclable materials; only the paint and the bearings' oil are not recyclable today. Finally, Semtive's turbines are completely noiseless at any wind speed and start generating electricity at wind speeds as low as 1.8 km/h, ideal in an urban environment. Semtive's latest versions of the S and M models produce 600W and 1,600W in nominal terms, respectively. Considering the average electricity consumption of a household in 2014 in Europe and the US, the NEMOI S and M could respectively supply 70% and 57% of the total electricity consumed by an average European and US household on a nameplate basis.
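For readers who want to reproduce this kind of comparison, the nameplate arithmetic is simply rated power times the 8,760 hours in a year, optionally scaled by a capacity factor (see the endnote on the nameplate basis below). In the sketch that follows, the 3,600 kWh/yr household consumption and the 25% capacity factor are illustrative assumptions of ours, not figures from this article.

```python
HOURS_PER_YEAR = 8760

def annual_output_kwh(rated_watts: float, capacity_factor: float = 1.0) -> float:
    """Annual energy (kWh) from a turbine's rated power.

    capacity_factor=1.0 reproduces the article's pure nameplate basis;
    real-world values are much lower and depend on the wind resource.
    """
    return rated_watts / 1000.0 * HOURS_PER_YEAR * capacity_factor

# Illustrative assumptions: an average EU household using 3,600 kWh/yr
# and a 25% capacity factor (neither number comes from the article).
coverage = annual_output_kwh(600, capacity_factor=0.25) / 3600.0
print(f"NEMOI S vs assumed EU household: {coverage:.0%}")  # ~37%
```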
NEMOI – Product characteristics
Source: Semtive website
Beyond the environmental impact, Semtive also chose to make a social and economic commitment to the regions where its products are sold by manufacturing the NEMOI locally. In doing so, the company also reduces import tariffs and shipping costs, optimizing its product cost structure and carbon footprint.
Additionally, Semtive’s products can be installed in one hour by a single person and with one tool, has a warrantied life cycle of 40 years (could reach 60 years), does not require any maintenance with only two moving parts, and has an average ROI of ~3 years. In case of dangerous wind conditions such as hurricanes or tornadoes, the blades can be removed in under 15-20 minutes. Also, Semtive’s controller, which the company started developing a year ago, is simply plugged into an electrical socket like any other appliance, enabling a real time monitoring of the electricity production and usage from a mobile app. The controller also allows the user to connect other renewable energy devices such as solar panels or storage devices and to select which energy source is best to use at a given time, when to store electricity or to sell it to the grid.
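The ~3-year ROI figure can be framed as a simple payback calculation: purchase price divided by the value of the electricity offset each year. Below is a minimal sketch of that model; only the $4,695 Nemoi M price comes from the endnotes below, while the 5,000 kWh/yr output and $0.30/kWh tariff are hypothetical placeholders, since actual payback depends on wind conditions and consumption habits.

```python
def simple_payback_years(price_usd: float, annual_kwh: float,
                         tariff_usd_per_kwh: float) -> float:
    """Years to recover the purchase price from avoided electricity costs.

    Ignores maintenance (the article reports none is required),
    financing, and tariff escalation: a first-order estimate only.
    """
    return price_usd / (annual_kwh * tariff_usd_per_kwh)

# Hypothetical inputs: Nemoi M at its $4,695 list price, offsetting an
# assumed 5,000 kWh/yr at an assumed $0.30/kWh retail tariff.
print(f"{simple_payback_years(4695, 5000, 0.30):.1f} years")  # ~3.1
```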
The company understood that the future of energy lies in decentralization and technology, which fostered the inclusion of a blockchain-based system enabling users to buy NEMOI electricity from each other via a microgrid.
Semtive’s NEMOI represents a highly long term cost-effective and sustainable power generation solution for the residential and commercial/industrial markets, satisfying both i) end customers, with lower electricity bills and a reduced environmental footprint, and ii) utilities, with innovative blockchain-enabled opportunities.
Fifth-generation turbine characteristics; the unit is currently undergoing UL certification. The information in the brochure available online corresponds to Semtive's fourth-generation model.
Based on an average wind speed of 11 m/s or 24.6 mph
The nameplate or nominal basis does not factor in the capacity factor, which determines the actual output generated by the system in real-life conditions.
Based on a $4,695 Nemoi M price tag and dependent on wind conditions and electricity consumption habits |
Project Management Fundamentals for Library Staff
An Infopeople Online Learning Course
NOTE: Because this course falls over the December holidays, the end date for this course has been extended until Jan 1, 2018 to provide students additional time to complete course work.
Course Instructor: Emily Clasper
Do you find yourself in charge of coordinating projects at your library? Would you like to hone your project planning and execution skills to make your projects more efficient and successful? Do you want to become a more effective project team leader?
By tapping into the principles of Project Management developed in the corporate world, library staff can learn to manage projects more consistently and efficiently. This course will introduce the basic principles of formal Project Management, and cover several factors critical to project success within a library environment.
In this course, we will focus on:
- The fundamental principles of Project Management
- Establishing project goals
- Project Planning strategies and techniques
- Managing change
- Planning for effective communication
Course Description: This four week course will use readings, videos, written assignments, and online discussions to introduce participants to the basics of Project Management as they apply to a typical library environment. Assignments will be shared with all participants, who will be encouraged to give their colleagues feedback on their work and observations. You will be asked to share your reactions to the supplemental materials and discuss the course content via the course forums.
Course Outline: When you log in to the Infopeople online learning site, you will see weekly modules with these topics:
- Week 1 Topic: The Foundations of Project Management
- The purpose of following a formalized Project Management process
- General Project Management terms and concepts
- The Project Lifecycle
- Becoming an effective Project Manager
- Project Management within the world of libraries
- Week 2 Topic: Beginning the Planning Process
- Setting Project Goals and Objectives related to the library's overall mission and strategic plan
- Defining Project Deliverables and Outcomes
- Taking stock of Project Stakeholders and their roles
- Developing a Communication Plan
- Week 3 Topic: Finishing the Plan and Getting to Work
- Creating an Activity Register and Work Breakdown Structure
- Developing a Project Schedule
- The Formal Project Plan
- Project Documentation
- Project Execution Tips
- Week 4 Topic: Creating a Sustainable Project Management Program
- Finishing out the Project Lifecycle
- Keeping everyone Engaged and Accountable
- Making Project Management Processes Stick
- Project Management Software
Pre-course Assignment: None
Time Required: To complete this course, you can expect to spend 2 ½ hours per week, for a total of ten course hours. Each week's module contains readings and various options for assignments, discussions, or online meetings. You can choose the options most relevant to your work and interests. Although you can work on each module at your own pace, at any hour of the day or night, it is recommended that you complete each week's work within that week to stay in sync with other learners.
Who Should Take This Course: Librarians, staff, and administrators who plan and lead team projects of any scale within a library environment, as well as those who plan to do so in the future.
Online Learning Details and System Requirements may be found at: infopeople.org/training/online_learning_details.
Learner Requirements: Word processing software, spreadsheet software.
After the official end date for the course, the instructor will be available for limited consultation and support for two more weeks, and the course material will stay up for an additional two weeks after that. These extra weeks give those who have fallen behind time to work independently to complete the course.
Keywords: Project management |
Cyrus McCormick, in full Cyrus Hall McCormick (born February 15, 1809, Rockbridge County, Virginia, U.S.—died May 13, 1884, Chicago, Illinois), was an American industrialist and inventor who is generally credited with the development (from 1831) of the mechanical reaper.
McCormick was the eldest son of Robert McCormick—a farmer, blacksmith, and inventor. McCormick’s education, in local schools, was limited. Reserved, determined, and serious-minded, he spent all of his time in his father’s workshop.
The elder McCormick had invented several practical farm implements but, like other inventors in the United States and England, had failed in his attempt to build a successful reaping machine. In 1831 Cyrus, aged 22, tried his hand at building a reaper. Resembling a two-wheeled, horse-drawn chariot, the machine consisted of a vibrating cutting blade, a reel to bring the grain within its reach, and a platform to receive the falling grain. The reaper embodied the principles essential to all subsequent grain-cutting machines.
|
Previously published in Plastics Engineering and posted with permission from the Society of Plastics Engineers.
NOTE: figures in this article on rigid plastics and plastic film recycling have been updated since publication in Plastics Engineering magazine.
Plastics recycling is growing. Steadily, broadly, expansively.
It’s useful every once in a while to remind ourselves why we recycle anything at all. Imagine for a moment the long and winding path of something as commonplace as a plastic milk jug: from natural resources to petro/chemical facilities to plastics production to blow molding to filling, shipping, merchandizing, purchasing, and (finally) enjoying. Does it make sense to simply discard all that? Materials often have value even after we use them, so burying them in a landfill is an egregious waste of resources.
Furthermore, recycling can reduce energy use and cut greenhouse gas emissions. According to EPA, recycling combined with composting (EPA often lumps these together) saved “the same amount of energy consumed by almost 10 million U.S. households in a year” and reduced greenhouse gas emissions the equivalent of removing more than 33 million passenger vehicles from the road in a year. Plus recycling industries can create significantly more jobs than simply hauling and burying garbage, says EPA.
Looking specifically at plastics, a 2010 study found that recycling HDPE and PET plastics can save enough energy each year to power 750,000 homes. And recycling HDPE can reduce greenhouse gas emissions 66 percent compared to using virgin HDPE.
So … combine energy savings, reduced waste and greenhouse gases, and more jobs, and recycling sounds pretty smart. Thankfully, recycling in the U.S. continues to grow—according to the EPA, the U.S. recycling rate has more than doubled since 1990.
The rate for plastics recycling has grown even more than that, in part because recycling these newer materials began in earnest only in the 1990s. Companies that make plastics invested billions of dollars over the past few decades to help set up the recycling infrastructure (the topic of a future Plastics Make it Possible® article). So today the plastic bottle recycling rate is approaching the rate for glass.
While there still is a long way to go to catch aluminum and steel recycling rates, here's a quick look at the success of recycling for some common plastics based on the latest tracking data … and advances that may well increase the momentum.
The U.S. recycling rate for plastic bottles reached nearly 32 percent in 2014. Plastic bottle recycling grew by 97 million pounds to top 3 billion pounds for the year. That marks the 25th consecutive year that Americans have increased the pounds of plastic bottles collected for recycling (surveying began in 1990).
The collection of polypropylene bottles, specifically, jumped more than 28 percent to reach a recycling rate of nearly 45 percent, higher than the collective recycling rate for glass beer and soft drink bottles.
The recycled resins made from plastic bottles are used widely in plastic products and parts, from clothing fabrics to auto components to new bottles.
Non-bottles or “rigids”
Rigid plastics represent a category of non-bottle plastic containers along with caps and lids. Nearly 1.3 billion pounds of rigids were collected for recycling in the U.S. in 2014. That's quadruple the amount collected in 2007 (when measuring began).
In the U.S., these plastics are recycled primarily into automotive parts, crates, buckets, pipe, and lawn and garden products.
Instead of being collected at curbside like bottles and rigids, plastic bags (e.g., for groceries, food/produce, newspaper delivery, dry cleaning), and plastic overwraps for products (e.g., beverage cases, diapers, napkins) are collected for recycling at more than 18,000 grocery and retail stores across the U.S.
Even though this at-store collection program is relatively new, recycling of this postconsumer plastic “film” packaging reached nearly 1.2 billion pounds in 2014. Plastic film recycling has increased 79 percent since 2005 (when measuring began) and has reached a rate of 17 percent.
Recycled plastic film is used to make a range of products, including durable composite lumber for outdoor decks and fencing, home building products, lawn and garden products, crates, pipe, and film for new plastic packaging.
Access to curbside and drop-off recycling programs for foam polystyrene packaging is growing across the U.S. The EPS Industry Alliance reported an all-time high recycling rate of 34 percent for foam polystyrene protective packaging for 2013.
To help increase recycling, a new interactive website allows Americans and Canadians to search for local foam packaging recycling programs. PSFoamRecycling.org differentiates between programs that accept protective foam packaging (typically used for transporting electronics and other high-end products), programs that collect foam food packaging (such as coffee cups, clamshell containers, egg cartons, and meat trays), and programs that collect both types of packaging. The site also notes whether the foam packaging is collected at curbside or drop-off programs and identifies foam packaging “mail back” programs for areas where local recycling does not exist.
Recycled polystyrene is used to make numerous products, from picture frames to crown molding to egg cartons.
New Recycling Fund and Plastics Recycling Facility
To help increase the momentum of recycling, 10 of the largest consumer products companies (e.g., P&G, Walmart, Coca-Cola) have created the $100 million Closed Loop Fund. The Fund provides zero and low interest loans to cities and companies that want to build new recycling facilities and projects for plastics and other materials. By 2025, the fund aims to eliminate more than 50 million tons of greenhouse gas, divert more than 20 million tons of waste from landfills, and create more than 20,000 jobs.
Its first project opened in late 2015: a high-tech recycling facility in Baltimore that is able to sort 54,000 tons of plastics for recycling each year, including some that today are not often recycled. One of the largest facilities of its kind, the QRS Plastics Recovery Facility is expected to collect plastics within a 500-mile radius along the East Coast and process approximately 4,500 tons of material each month, more than double what is currently possible in the U.S., according to its backers.
The facility uses technologies that can make it more economical to sell these plastics in the market, which could considerably increase the amount of plastics recycled. Advanced, high-tech optical sorters “read” different types of plastics and then send puffs of air to blow specific items into the correct stream. These streams of plastics that have been separated according to type can be much more valuable than streams of mixed plastics. The facility also plans to process the plastics back into the raw material (plastic pellets) for sale.
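To make the sort-by-resin idea concrete, here is a toy sketch of the decision logic; the resin labels and stream numbering are illustrative assumptions, not the facility's actual configuration.

```python
# Toy model of an optical sorting line: a sensor labels each item's resin
# type, and an air jet diverts it into the matching output stream.
from collections import defaultdict

AIR_JET_STREAMS = {"PET": 1, "HDPE": 2, "PP": 3}  # stream number per resin

def sort_items(detected_items):
    """Group (item, resin) pairs into streams; unknown resins go to reject."""
    streams = defaultdict(list)
    for item, resin in detected_items:
        streams[AIR_JET_STREAMS.get(resin, 0)].append(item)  # 0 = mixed/reject
    return dict(streams)

conveyor = [("bottle", "PET"), ("jug", "HDPE"), ("tub", "PP"), ("film", "LDPE")]
print(sort_items(conveyor))
# {1: ['bottle'], 2: ['jug'], 3: ['tub'], 0: ['film']}
```

Separated single-resin streams command a higher price than mixed bales, which is what makes this sorting step economical.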
The combination of funding source, advanced technologies, and pellet processing is designed to make broader plastics recycling more cost-effective. According to the Closed Loop Fund, facilities like this could be replicated across the nation—and beyond.
The momentum in plastics recycling is encouraging and helping all of us reduce our environmental footprint. Let’s keep it up.
For more information on plastics recycling, visit plasticsmakeitpossible.com |
It was not ostensibly a case of a lack of funds, nor was it a case of wilful neglect, but by the 1840s, despite Port Elizabeth's harbour exceeding Cape Town's in exports, it still operated directly from the beaches. The so-called landing beaches stretched along the shore from Jetty Street to the mouth of the Baakens River.
The loading and unloading of vessels at anchor in the Bay was dealt with in a prior blog. This article, instead, deals with the management of the vessels in the Bay.
Main picture: Vessels at anchor in Algoa Bay
Without question, the operation of the "harbour" in the 1840s was archaic by nineteenth-century standards. The clamour for the erection of jetties was persistent and insistent, yet these calls fell on deaf ears. It stands to reason that the lack of jetties delayed the process of loading and unloading vessels to such an extent that the turnaround time in the Bay could extend to as much as a month. It is little wonder that ship owners clamoured for a more productive method of operation.
In order to systematise the harbour operations, on the 6th of February 1844 a proclamation entitled "Notice to Mariners. Port Instructions for Algoa Bay" was issued by command of the Governor, over the signature of John Montegu, Secretary to Government.
These read as follows: Should it be the intention of the master of a vessel to discharge or receive on board any considerable quantity of cargo, a convenient berth will be pointed out by the Port Captain, as close to the landing-place as the safety of the vessel and other circumstances will admit.
The vessel must then be moored with two bower anchors, with an open hawse to the South East and especial care taken not to overlay the anchors of other vessels, or in any way to give them a foul berth. Ships or vessels touching for water and refreshments, may ride at single anchor, but they must then anchor well to the northward, so as to prevent danger (in case of drifting) to the vessels moored; and it is particularly recommended, when riding at single anchor to veer out 70 or 80 fathoms of chain; the other bower cable should be ranged and the anchor kept in perfect readiness to let go; strict attention should be paid to keep a clear hawse, (when moored), the more so when it is probable [that] the wind may blow from the S.E., and whether at single anchor or moored, the sheet anchor should be ready for immediate use.
The situation of the vessel must be taken by land-marks, and the depth of the water, and should any accident occur by which she may drift from such [a] situation, or lose her anchors, the same must be notified in writing to the Port Captain.
It is recommended that vessels be kept as snug as possible; especially such as may have to remain some time in the anchorage, for the periodical winds blow occasionally with much violence
Vessels having Marryat’s Code of Signals, can make their wishes known to their Agents, in blowing weather, through the Port Office. Vessels not having the Code, can make the following with their Ensigns:
1st. Ensign in the Fore Top-mast Rigging – I am in want of a Cable
2nd. Ensign in the Main Top-mast Rigging – I am in want of an Anchor
3rd. Ensign in the Fore Rigging – I have parted a bower cable
4th. Ensign in the Main Rigging – I am in want of an Anchor and a cable
5th. Whift, where best seen – Send off a boat
Whenever a red flag may be hoisted at the Port Office, it denotes that it is unsafe for any Boat to land.
(Signed) H.G. DUNSTERVILLE. Port Captain, Approved
By Command of His Excellency, the Governor
(Signed) JOHN MONTEGU
Sec. to Government
Colonial Office, 6th February, 1844.
Bower anchors: each of two anchors carried at a ship’s bow, formerly distinguished as the best bower (starboard) or small bower (port).
Open hawse: an arrangement of starboard and port anchor cables in which the cables run directly to the anchors
Sheet anchor: an additional anchor for use in emergencies.
Whift: an obsolete noun meaning a brief emission of air, or a hint of a sound or smell; in the signal list above, a flag (wheft) hoisted as a signal.
The Eastern Province Directory and Almanac for 1848 (1848, Godlonton & White, Grahamstown) |
Desk research, also known as secondary research, means analyzing existing data collected in previous research; that is, it screens and then analyzes secondary data. The term is used in opposition to field research, for which data are collected through live methods such as focus groups, one-to-one interviews, and experiments. Desk research is usually regarded as the first step of a marketing research project.
Successful desk research relies on the quality of its sources of existing data. There is a variety of sources, which can be classified as follows:
– Internal company information (shipping lists, financial statements, sales notes, client lists, etc.)
– Government data: one of the most important responsibilities of each country's resident trade offices is to promote mutual commercial communication and to collect market information. Through these resident offices, researchers can access market information such as trade statistics and import-export directories.
– Data published by international organisations, trade unions, and chambers of commerce. For instance, the World Trade Organisation provides studies of particular goods and of individual countries' markets. Many trade unions regularly collect or publish information about their products, and chambers of commerce provide directories of people and enterprises.
Moreover, secondary data can be obtained from banks, market research companies, consumer unions, libraries, etc.
Facing so much information, a researcher should know how to collect what is useful for the research and what helps reach its objective. This is the concern of the desk research procedure. First, the data should be tested and verified, since its origins vary widely and not every piece of information is relevant to the study. In general, the following five criteria can be considered when evaluating secondary data (a minimal scoring sketch follows the list):
– Content: whether the existing data is complete and meets the requirements of the subject being studied
– Level: whether the existing data is of professional quality
– Time validity: whether the existing data is out of date
– Accuracy: whether the existing data is credible
– Facility: the cost and time required to gain access to the data
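A minimal sketch of that five-point screen, with illustrative 1-5 ratings and an assumed pass threshold:

```python
# Score a candidate secondary-data source on the five criteria above.
# Ratings, equal weighting, and the 3.5 threshold are illustrative choices.

CRITERIA = ("content", "level", "time_validity", "accuracy", "facility")

def screen_source(scores, threshold=3.5):
    """scores: dict mapping each criterion to a 1-5 rating."""
    avg = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
    return avg, avg >= threshold

ratings = {"content": 4, "level": 3, "time_validity": 5,
           "accuracy": 4, "facility": 2}
avg, keep = screen_source(ratings)
print(f"average {avg:.1f} -> {'keep' if keep else 'discard'}")  # average 3.6 -> keep
```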
In desk research, secondary data is the object of analysis, so the information must be chosen logically for the results of the study to be sound. Generally speaking, information collection proceeds from the general to the specific. Over the course of the research, the project assistant gains a better sense of where to look and what remains to be done. Once the report is finished, as the last step, the desk research is essentially complete. In many cases, however, field research is still needed, and it is widely accepted as a good tool for deepening the findings; it becomes essential whenever desk research cannot reach certain objectives.
To learn more about China market research, visit Daxue Consulting.
Written by Pu Dan from Daxue Consulting China |
Importance of Job Analysis
Job analysis helps in assessing resources and establishing strategies to accomplish business goals and strategic objectives. Effectively developed employee job descriptions are communication tools that contribute significantly to an organization's success.
The main purpose of conducting a job analysis is to prepare the job description and job specification, which help in hiring a workforce of the right quality.
Job analysis can be used in training to identify or develop training content, assessment tests to measure the effectiveness of training, the equipment to be used in delivering the training, and methods of training.
Job analysis can be used in compensation to identify or determine skill levels, compensable job factors, the work environment, responsibilities, and the required level of education.
Job analysis can be used in selection procedures to identify or develop the job duties that should be included in advertisements of vacant positions, the appropriate salary level for the position (to help determine what salary should be offered to a candidate), minimum requirements for screening applicants, interview questions, selection tests and instruments (e.g., written tests, oral tests, job simulations), applicant appraisal forms, and orientation materials for new hires.
Job analysis can be used in performance review to identify or develop goals and objectives, performance standards, evaluation criteria, the length of probationary periods, and the duties to be evaluated.
An ideal job analysis should include the following segments:
Duties and Tasks: The basic unit of a job is the performance of specific tasks and duties. This segment should include frequency, duration, effort, skill, complexity, equipment, standards, etc.
Environment: This segment identifies the working environment of a particular job. This may have a significant impact on the physical requirements to be able to perform a job.
Tools and Equipment: Some duties and tasks are performed using specific equipment and tools. These items need to be specified in a Job Analysis.
Relationships: The position's place in the organizational hierarchy must be clearly laid out; employees should know whom they supervise and to whom they report.
Requirements: The knowledge, skills, and abilities required to perform the job should be clearly listed.
There are several ways to conduct a job analysis, including interviews with incumbents and supervisors, questionnaires (structured, open-ended, or both), observation, critical incident investigations, and gathering background information such as duty statements or classification specifications.
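One way to keep the five segments above together in practice is as a structured record; the sketch below is illustrative, with hypothetical field names and example content.

```python
# The five segments of an ideal job analysis captured as one record.
from dataclasses import dataclass, field

@dataclass
class JobAnalysis:
    title: str
    duties_and_tasks: list = field(default_factory=list)  # incl. frequency, effort, standards
    environment: str = ""                                 # working conditions, physical demands
    tools_and_equipment: list = field(default_factory=list)
    relationships: dict = field(default_factory=dict)     # reporting lines
    requirements: list = field(default_factory=list)      # knowledge, skills, abilities

analysis = JobAnalysis(
    title="Maintenance Technician",
    duties_and_tasks=["inspect machinery (daily)", "preventive maintenance (weekly)"],
    environment="factory floor, moderate noise, standing for long periods",
    tools_and_equipment=["multimeter", "hand tools", "CMMS software"],
    relationships={"reports_to": "Plant Engineer", "supervises": "none"},
    requirements=["electrical certification", "2 years' experience"],
)
print(f"{analysis.title}: {len(analysis.duties_and_tasks)} documented duties")
```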
It is important for organizations to hire the right candidates, who suit their work environment and requirements; otherwise they will end up stagnating. It is also important for job seekers to pick a job that suits their personality and interests, as this first step plays a deciding role in shaping their career and position in life. This is possible only when job seekers and organizations are able to communicate their requirements to each other. |
Thursday, 25 September 2014
Landmark shale gas study shows no groundwater problems
One of the difficulties in the current shale gas debate is that good data is hard to come by. Operators collect lots of data from around their sites, including water sampling to test for pollution, and geophysical monitoring to track where the fractures went during stimulation. However, this data is often considered commercially sensitive, so it rarely sees the light of day.
A government-sponsored project would be very useful, because it would provide a test-bed for an extensive monitoring program. All data could then be made public, and the claims of all those involved in the shale gas debate openly tested.
This is exactly what has happened in the USA, with the final report released this week. The US National Energy Technology Lab (NETL) sponsored a monitoring program at a hydraulic fracturing operation in Greene County, Pennsylvania. The monitoring program consisted of two parts: microseismic monitoring to track the fractures created by the stimulation, and geochemical sampling in overlying layers to test whether any contamination had occurred. Most importantly, because the data is publicly available, it's a great opportunity to talk through the anatomy of a hydraulic stimulation.
The first stage of shale gas extraction is to drill horizontal wells through which the fracking will be done. The figure below shows a map of the lateral wells drilled. Those in the yellow box were the 6 wells that made up the NETL study.
The figure below shows a stratigraphic column of the geology in the study area. The Marcellus shale is at a depth of 8,000 feet. Overlying the Marcellus at a depth of 2,000-4,500 feet are Upper Devonian age rocks that contain conventional natural gas reservoirs. These reservoirs have been exploited for conventional gas, so wells penetrating these formations are available, and have been used for geochemical monitoring. Freshwater aquifers are found at depths of less than 1,000ft.
I've heard over and over again that apparently shale rocks in the USA don't have faults in them (one controversial geologist in particular springs to mind). Seismic reflection surveys show clearly that the Marcellus rocks in this area are in fact substantially faulted.
I'll cover the geochemical aspect of the monitoring program first. Geochemical tracers were injected with the hydraulic fracturing fluids. Fluid samples were taken from the overlying Upper Devonian layers, and analysed to look for the presence of these tracers, which would indicate upwards fluid migration via natural and/or hydraulic fractures. The figure below shows the monitoring well depths in relation to the shale gas operation.
I won't go into detail on the geochemical evidence; suffice it to say that no evidence for upward migration of gas and/or fracking fluid was found. This concurs with the study discussed in my last post: there is no evidence that fluids can migrate from 2 km down to the surface - if contamination is occurring, it occurs via faulty well bores, not through fractures in the rock.
Of more interest to me is the microseismic data, because it allows us to see what hydraulic fracture stimulation looks like. Geophones were placed in the monitoring wells that extend to depths close to the reservoir. These pick up the "microseismic events" - the pops and cracks as the fractures open up, allowing us to track where the fractures have gone.
The figure below shows the resulting microseismic map. Each blue dot represent a microseismic event - we can connect them together to see where the hydraulic stimulation fractured the rock.
The first aspect to note is that all the events are close to the well. The hydraulic fractures rarely extend more than 200-300m from the well, and we can see that to be the case here.
Microseismic event magnitudes range from -3.0 to -0.5. This is a typical range for hydraulic stimulation, and is below what could be detected even with a seismometer placed on the surface, and orders of magnitude below what people can typically feel, which is approximately magnitude 2.0.
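Magnitude scales are logarithmic, with radiated energy growing roughly as 10^(1.5 x M), so the gap between these microseismic events and the felt threshold is enormous; a quick sketch of the arithmetic:

```python
# Radiated seismic energy scales roughly as 10**(1.5 * magnitude), so each
# whole unit of magnitude is ~31.6x the energy. Compare typical frac-job
# event magnitudes with the ~M2.0 felt threshold mentioned above.

def energy_ratio(m_big, m_small):
    return 10 ** (1.5 * (m_big - m_small))

for m_event in (-3.0, -0.5):
    print(f"M{m_event:+.1f} event: a felt M2.0 quake carries "
          f"{energy_ratio(2.0, m_event):,.0f}x more energy")
# M-3.0 -> ~31,622,777x; M-0.5 -> ~5,623x
```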
This is interesting, because as we have seen from the reflection seismic, the rocks here are faulted. We constantly hear about how fracking in faulted rocks will inevitably lead to large earthquakes, water contamination and not much short of the end of the world. In fact, it is rare for a fault to be critically stressed such that it will trigger large events. Service providers have estimated that they see evidence for fracture-fault interaction in about 30% of stimulations, yet induced seismicity such as that seen during Cuadrilla's operations at Preese Hall are very very rare (with 1 case in the UK, and 3 cases in North America).
You can see from the microseismic that one of the stimulations did interact with one of the faults. It can be seen in the cluster of events to the left of the above figure. The figure below shows the microseismic events in more detail, and the fault interaction can be seen in the red-coloured events to the top left.
The next plot shows the same events in a map view. You can see how the events line up to demarcate the re-activated fault.
In summary - I often hear how the faulted UK geology will render hydraulic stimulation impossibly dangerous in the UK, whether because it will trigger large earthquakes or contaminate water supplies. This report shows hydraulic stimulation in the Marcellus shale, with lots of faults in evidence. The microseismic data shows that the fractures interacted with the faults. However, no large seismic events were triggered, and extensive geochemical monitoring showed no evidence of fluid and/or gas migration into shallow layers. |
Why America Needs Nuclear Energy
Question: How would nuclear power work on a large scale?
James Hansen: Well, nuclear power -- the kind of nuclear power we have now is called second-generation nuclear power. It's comparable in cost to coal. Once you have the nuclear power plant, then the fuel is very inexpensive, so nuclear power is quite inexpensive. But it's difficult in the United States to get a nuclear power plant built, and it takes so many years that it drives the cost up. So now in England they've realized that they will need to have nuclear power in the future, so they've put a limit -- once a government commission decides on where the power plants will be built, the public will have one year to object to this and possibly get some changes. But they can't drag it out six or seven years, the way it happens in the United States, because that drives up the price tremendously.
And there's also the possibility for fourth-generation nuclear power. That's a technology which allows you to burn all of the nuclear fuel. Presently, nuclear power plants burn less than 1 percent of the energy in the nuclear fuel. Fourth-generation nuclear power allows the neutrons to move faster, so it can burn all of the fuel. Furthermore, it can burn nuclear waste, so it can solve the nuclear waste problem. And the United States is still the technology leader in fourth-generation nuclear power. In 1994, Argonne National Laboratory, now called Idaho National Laboratory, was ready to build a fourth-generation nuclear power plant, but the Clinton-Gore administration canceled that research because of the antinuclear sentiments in the Democratic Party. Well, we still have the best expertise in that technology, and we should develop it because it's something we could also sell to China and India, because they're going to need nuclear power. They are not going to be able to get all of their energy from the sun and from the wind.
Question: What is the most effective approach to alternative energy?
James Hansen: Well, the most effective one is energy efficiency. We waste a lot of our energy. We can get vehicles that get more miles per gallon. There are many ways to improve energy efficiency. In fact, some states are twice as efficient as other states, just because -- fossil fuels were so cheap we just didn't pay attention to how effectively we were using them. But in addition, there are renewable energies: solar energy, wind energy. And I think that nuclear power has to be part of the solution, because at this time it's the only alternative to coal for base-load electrical power. And we do now have the technology for much safer and more efficient nuclear power, as compared to the old versions that were used in the past several decades.
The head of NASA's Goddard Institute explains fourth-generation nuclear power, and why harnessing this technology will be pivotal for America's future.
|
- The main ores of iron usually contain Fe2O3 (hematite, ~70% iron) or Fe3O4 (magnetite). Effective beneficiation treatment requires effective crushing, grinding, and screening; a typical flow sheet for an iron ore beneficiation plant is shown in Fig. 1.
- A process exists for upgrading low-grade magnetite-containing iron ore with minimum fines (US patent 2,962,231, Weston, 1960): according to present beneficiation processes, the ore is crushed, by jaw and gyratory crushers, and usually screened.
- Rio Tinto Iron Ore's low-grade ore beneficiation plant in the Pilbara treats, for example, magnetite iron ore containing only about 4% Fe in beach sands. Crushing and screening is typically the first step of iron ore beneficiation processes.
- Beneficiation of low-grade ores is the process of increasing the grade of a mineral through unit operations that determine the yield of magnetite from those ores (Connelly, 2009). Crushing and screening can result in an upgrading of the iron ore fines.
- After 20 years of practice, Xinhai's magnetite separation production line reduces production-line cost; an ilmenite beneficiation plant in Ecuador, a republic in northwestern South America, is one of its successful cases, using a jaw crusher.
- In mineral processing engineering design, the high-pressure grinding roll (HPGR) is similar to the typical fixed-roll crusher; HPGR and cone crusher product from low-grade magnetite can be processed into a high-grade premium concentrate.
- Mineral processing is the art of treating crude ores and mineral products. As a rule, comminution begins by crushing the ore to below a certain size; strongly magnetic minerals such as magnetite and franklinite separate with good results.
- Where magnetite is the principal iron mineral, the rock is called magnetic taconite. Processing of taconite consists of crushing and grinding the ore to liberate the iron minerals (see the Standards of Performance for Metallic Mineral Processing).
- A waterless separator can achieve saleable grades of magnetite; while the demonstration system can process up to 170 kg/h of magnetite, the Kelsey Imptec superfine crusher provides cost-effective particle size reduction to separate magnetic iron ores during beneficiation.
- Keywords: flowsheet assessment; iron; mineral processing plant; simulation - assessment and optimization of crushing and grinding equipment in plants treating ore composed mainly of magnetite and hematite. |
Buildings presently account for approximately 40% of the world’s energy consumption, and that figure is on the rise. Experts project that energy consumption in buildings will increase substantially in the world’s most populous and fastest-growing countries, such as China and India. Beyond energy use, buildings also are responsible for nearly half of all greenhouse gasses, specifically carbon dioxide.
Building owners and facility managers are facing increasing pressures to reduce their energy consumption as national and local governments adopt more stringent sustainable energy policies. Regulations in some countries will even require by 2025—and in some cases by 2020—that all new buildings are neutral (a.k.a. net-zero) or even positive with regard to energy, which means that the building will have to produce at least as much energy as it consumes.
How can this be accomplished? There are two separate but complementary approaches to reducing building energy consumption:
- Implementing energy efficiency measures
- Integrating renewable energy sources
To achieve optimal results and optimize investment, building energy efficiency measures should be considered first. This is especially true for existing buildings, where investments usually are made progressively over time.
For new buildings, the net-zero energy consumption requirement is specified in the early stages of the project. With such a goal clearly in mind, a building can be designed from the beginning to be net zero, ensuring that the building can incorporate renewable energy sources and will support active energy management systems and effective building operation.
BOOSTING ENERGY EFFICIENCY
Energy efficiency measures also fall into two categories, passive and active.
Passive energy efficiency measures simply avoid the unnecessary use of energy. One example of a passive energy efficient measure is switching from conventional light bulbs to energy-saving lighting such as halogen incandescent, compact fluorescents (CFL), and LED lightbulbs, which produce the same amount of light but use less energy.
Active energy efficiency is about taking control of energy use. This type of energy efficiency measure typically requires continuous monitoring—using power measurement devices and cloud-based or on-premises power monitoring software—and active management, including an action plan and following up on results. For more detailed information on this approach, see our white paper, "Making permanent savings through active energy efficiency."
Figure 1 shows how energy efficiency measures—both passive and active—in one existing U.S. office building have combined with the integration of renewable energy to dramatically lower energy use since 2006.
INCORPORATING RENEWABLE ENERGY
Combined passive and active energy efficiency measures can significantly reduce a building's energy consumption, and consequently the energy bill. But to become neutral or positive with regard to energy, it is essential to integrate clean, local energy sources. A future post will explain in more detail the various renewable energy technologies available for use in buildings. For now, note that photovoltaics (PV) is the main technology for buildings on the road to net-zero energy consumption. Even today, a building's PV installation may very often have the potential to fully cover its electrical energy needs.
Several factors determine the PV system size and capacity. The area available for the PV installation is one large factor in this regard. Frequently this consists primarily of the building rooftop. In existing buildings, the rooftop may be only partially available for PV panel installation. For example, there may already be equipment installed on the rooftop, or the building structure may not be sized to support the weight of the PV system. In other cases, the rooftop orientation or the building environment may not provide optimal conditions for maximal solar irradiance (i.e., how much energy you can count on being available). Having a car parking area on the building site represents a very large potential for installing additional PV panels. In certain types of structures, a building-integrated PV (BIPV) system with solar panels integrated into the façade or other parts of the building envelope may be an option. In such configurations, PV installation is integrated by design.
Other important PV system sizing factors include:
- the geographical location, which determines the solar irradiance,
- the building usage and type, and
- its energy consumption.
The way that the PV energy is used—self-consumption or grid injection—does not play a role in the evaluation of the building's energy balance. Indeed, most existing net-zero energy buildings export their energy to the grid.
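As a back-of-envelope illustration of PV sizing, the sketch below multiplies available area by irradiance, panel efficiency, and a performance ratio; every input value is an illustrative assumption rather than a figure from this article.

```python
# Rough annual PV yield estimate for a rooftop array.
# Defaults are typical ballpark values, not project data.

def annual_pv_yield_kwh(area_m2, irradiance_kwh_m2_yr=1400,
                        panel_efficiency=0.20, performance_ratio=0.80):
    """Estimated yearly AC output of a rooftop PV installation."""
    return area_m2 * irradiance_kwh_m2_yr * panel_efficiency * performance_ratio

roof_area = 800.0        # m2 usable after rooftop equipment is accounted for
consumption = 180_000.0  # assumed annual building load, kWh

pv_yield = annual_pv_yield_kwh(roof_area)
print(f"PV yield: {pv_yield:,.0f} kWh/yr "
      f"({pv_yield / consumption:.0%} of assumed consumption)")
# 800 * 1400 * 0.20 * 0.80 = 179,200 kWh/yr, i.e. roughly net zero here
```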
A STUNNING NET-ZERO ENERGY BUILDING
Completed in 2014, Deloitte Netherlands’ The Edge headquarters building in Amsterdam is a model of sustainability. In addition to having a large rooftop, building-integrated PV system that covers the electrical needs of the building, the Edge is packed with sensors. These allow the highly integrated building systems to provide light, shade, ventilation and so on, all based on real-time data.
Such system integration is an essential part of making any building energy neutral. For The Edge, designers turned to Schneider Electric’s EcoStruxure™ and SmartStruxure™ controls and systems to intelligently link smart electrical distribution and field devices.
Read our case study to learn more about how The Edge is achieving industry-leading performance.
The next post in this series will be, “Power Up Your Next Building With These Renewable Energy Technologies.” |
SolarReserve Banks on Storing Heat from the Sun
One of the knocks against renewable energy has been that, unlike coal, natural gas and nuclear energy, it is intermittent. Solar and wind power can’t be relied upon when the sun doesn’t shine or the wind doesn’t blow.
SolarReserve, a well-financed California startup that can store energy from the sun, wants to change that.
With the help of a $737 million loan guarantee from the U.S. Department of Energy [PDF, download], SolarReserve intends to begin construction this summer on Crescent Dunes, a 110-megawatt (MW) solar thermal project in Tonopah, Nev. It will store the sun’s heat in molten salt, a mixture of sodium and potassium nitrate that, the company says, is an efficient and inexpensive medium in which to store energy. This enables the plant to deliver a steady stream of electrons to customers of NV Energy, the Nevada utility that has signed a long-term agreement to buy electricity from Crescent Dunes.
I talked about the molten salt technology by phone last week with Kevin Smith, Solar Reserve’s CEO. We also discussed the economics of solar thermal and the rationale for federal subsidies of clean energy.
“At elevated temperatures, say, above 500 degrees, these salts have the consistency of water,” Smith told me. “The molten salt looks like water. It’s clear. The advantage of it is that we can heat it up, and maintain the salt in the 500 degrees to 1,050 degrees, and even at the high temperature, it’s still a liquid. We store it in a tank at atmospheric pressures. And then, when we want to generate electricity, we generate steam that goes through a conventional steam turbine.”
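The storage capacity implied by that temperature span follows from the sensible-heat relation Q = m · cp · ΔT; the sketch below assumes a salt mass and an approximate specific heat for nitrate "solar salt", neither of which is a SolarReserve figure.

```python
# Sensible-heat storage: Q = m * cp * dT.
CP_SALT = 1.5  # kJ/(kg*K), approximate for a sodium/potassium nitrate mix

def stored_energy_mwh_thermal(mass_kg, t_hot_f, t_cold_f):
    delta_k = (t_hot_f - t_cold_f) * 5.0 / 9.0  # Fahrenheit span -> kelvin
    return mass_kg * CP_SALT * delta_k / 3.6e6  # kJ -> MWh

mass = 1_000_000.0  # assumed 1,000 tonnes of salt
print(f"{stored_energy_mwh_thermal(mass, 1050, 500):.0f} MWh_th")
# ~127 MWh of heat per 1,000 t of salt across the quoted 500-1050 F span
```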
The molten-salt storage technology that sets Solar Reserve apart from competitors has a long history, even though the company itself is less than five years old.
Back in the 1970s, the DOE funded a pilot solar thermal project called Solar One in the Mojave Desert, which used thousands of individual sun-tracking mirrors (called heliostats) to reflect solar energy onto a central receiver located on top of a tall tower -- the same basic solar-thermal design now being built by SolarReserve. That plant operated for four years in the 1980s. It was modified in the 1990s to become Solar Two, which began operation in 1996, using the molten salt technology to store heat. (Here’s a cool satellite image of the site.) Solar Two was shuttered in 1999. “There wasn’t much of a solar market out there,” Smith said.
By the mid-2000s, that had changed. So the company that developed the technology for Solar Two, called Rocketdyne, which is now part of the Pratt & Whitney division of United Technologies, formed a partnership with the U.S. Renewables Group, a $750 million private equity firm, to create SolarReserve. The private equity fund put up seed capital and Rocketdyne, whose primary business is engineering commercial and military space systems, gave the startup a 20-year exclusive license for its technology. In September 2008, at the height of the financial crisis, SolarReserve raised another $140 million in equity from investors including Good Energies, Citigroup, PCG Asset Management, CalPERS, Credit Suisse and Argonaut Private Equity.
Even with that strong financial backing, the company needs federal support and a state renewable energy mandate in Nevada to get its first plant built, Smith conceded.
Why, I asked him, after more than 20 years of development, isn’t solar-thermal technology ready to compete in the marketplace?
“There are a couple of answers to that,” he replied. “First, the government has always been involved in energy policy. Existing in the tax codes now is support for coal and oil and natural gas and nuclear. They’ve been heavily subsidized for decades.”
Second, U.S. companies need a boost to compete overseas. For example, SolarReserve is developing a 50MW solar thermal plant in Spain along with a local partner.
“We’re competing with the Chinese and the Germans and the Spanish and French,” Smith said, “And all those countries have strong government-supported programs. It’s very difficult to compete in those markets and against those suppliers. Take a look at what happened on the PV [photovoltaic] side. The U.S. developed the technology and now more than half the panels are coming in from outside the U.S.”
He didn’t say so but a third reason for the subsidies is that U.S. energy policy currently fails to capture the full costs of burning fossil fuels -- both the short-term health effects of air pollution and the carbon dioxide emissions that drive climate change. Subsidizing clean energy with loan guarantees or renewable power mandates has proven to be an easier sell, politically, than putting a price on carbon to discourage the burning of fossil fuels.
I asked Kevin whether SolarReserve’s technology can compete on price with other low-carbon energy sources.
“We certainly can compete head-to-head directly with nuclear,” he said. “We compete well with clean-coal technologies.” Wind energy, he said, costs less but the wind tends to blow during periods of lower demand for electricity.
Kevin told me that SolarReserve has set its wholesale price for electricity at 13.5 cents per kilowatt-hour, which is higher than today's electricity prices, but it escalates at just 1 percent per year over the 25-year period of its contract with NV Energy. That's partly because there are essentially no fuel costs for a solar plant. The average retail price in Nevada for 2011 is just under 12 cents/kWh, while the U.S. average retail price is 11.52 cents/kWh. Electricity prices have been rising about 3 percent a year since 2000, though, so if they continue to do so, SolarReserve's price would fall below the retail price in about six years, Smith calculated.
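Escalating both prices year by year checks that estimate; the sketch below uses only the figures quoted in the paragraph above.

```python
# Wholesale 13.5 c/kWh escalating 1%/yr vs. ~12 c/kWh retail rising 3%/yr.

def crossover_year(wholesale=13.5, retail=12.0,
                   wholesale_esc=0.01, retail_esc=0.03):
    """First year in which the retail price overtakes the contract price."""
    year = 0
    while wholesale > retail:
        wholesale *= 1 + wholesale_esc
        retail *= 1 + retail_esc
        year += 1
    return year

print(crossover_year())
# returns 7 with these rounded inputs, close to the roughly six-year
# horizon Smith calculated from the same figures
```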
So this isn’t a bad deal for NV Energy, consumers or, more importantly, anyone concerned about the climate issue. Provided all goes well with the plant, and the DOE gets its loan paid back, everyone should come out a winner. And the company will have shown that, at least in the sunnier parts of the U.S., the sun’s energy can provide baseload power. |
Tool making is a rapidly evolving industry, driven by the growing number of trades that require tools. Here at Accura Engineering, leading specialists in precision engineering, we are proud to design and manufacture machined tool solutions for a wide range of industries. Decades of experience in the industry have enabled us to expand our skills and knowledge, allowing us to create precision engineered components to the highest standards, with short lead times and competitive prices. We have developed strong relationships with world-class suppliers in order to gain access to the finest materials and resources, which we are then able to pass on to our customers. With a flexible facility servicing a diverse range of customer sectors and products, we have developed the ability to plan and create many different engineered solutions here at Accura, including engineering fixtures.
What is an Engineering Fixture?
Engineering fixtures are typically precision engineered mechanical devices used predominantly in the manufacturing industry. Their primary purpose is to hold a product during a production process, including finishing operations. Engineering fixtures, also known as assembly fixtures, hold components in place to ensure that every product is produced (or finished) to the same standard and specification every time, promoting consistency and efficiency in each individual process. These precision machined devices are popular because they reduce the need for additional labour and increase the speed, efficiency, and conformity of the production process. There are many different types of engineering fixtures, including checking fixtures and CMM fixtures, both of which we can design and manufacture here at Accura using our special purpose machines.
Engineering Fixtures at Accura Engineering
All of our engineering fixtures are made using high quality materials and machinery to produce precision engineered solutions. Currently, our products are used by some of the world’s most vital industries, including the aerospace, defence, automotive and rail industries.
If you are interested in finding out more about any of our fixtures, including our presswork checking fixtures, please call 01902 454460. Alternatively, you can visit the special purpose machines page on our website.
If you have found this blog helpful, you may wish to read our previous blog on Precision Grinding. |
Sustainable design is not a new idea, just a new name. In the 1970s environmentally responsible and energy efficient building technology was a new idea. Exciting technologies such as solar energy emerged and there was active experimentation in alternative building technologies such as the revival of the ancient craft of timber framing. This movement was not embraced by the mainstream architectural and engineering professions at the time, but was generally dismissed as a hippie movement (sort of like ending the war or eating healthy food).
We don’t just talk the talk, we walk the walk.
During the early 1970s, Jim DeStefano was an engineering student and became fascinated by this movement to develop new and more socially responsible building technologies. He participated in a research project to retrofit a Philadelphia rowhouse with solar heat and he studied alternative building technologies outside the confines of his more traditional University engineering curriculum. His interest in socially responsible building technologies has never waned.
Through the next two decades, environmentally responsible behavior became politically incorrect. Environmental activists were branded “tree huggers” and environmental consultants limited their activities to assisting clients in dealing with regulatory agencies. There were some advancements in energy conservation motivated by building owners’ desire to save operating expenses, but these efforts were tempered by low energy prices.
As the 21st century dawned, Architects and Engineers rediscovered the idea of environmentally responsible design and christened it with the name “Sustainable” or sometimes “Green.” Many still think this is just a fad and if we wait awhile it will pass and we can get back to doing things the way they have always been done. We at DeStefano & Chamberlain, believe that sustainable design, like rock and roll, is here to stay.
Jim DeStefano has been a leader in promoting sustainable design in the structural engineer profession. He is a key member of the SEI Sustainability Committee and is one of the contributing authors of the Sustainability Guidelines for the Structural Engineer published in 2010.
At DeStefano & Chamberlain we believe there is more to sustainable design than scoring a few LEED credits for adding fly ash to concrete or acknowledging that structural steel is a recycled material. It is about doing things that make sense. Sustainable design initiatives are incorporated into every project we work on, regardless of whether it is LEED certified. It is not about hanging plaques on the wall.
Jim DeStefano practices sustainable design in his own life. In a house that he built for himself, he incorporated geothermal heating and cooling, insulated concrete forms, structural insulated panels (SIPs) as well as responsible forestry management of the site. We don’t just talk the talk, we walk the walk. |
Grain Alliance's strategy is to cultivate and operate as efficiently as possible in a concentrated geographical area. The radius between the farms is approximately 80 km. With farms in close proximity to each other, machinery and equipment can be used more efficiently, and the cost of transporting tractors and harvesters between fields is lower. The drying and storage facilities can also efficiently service the existing farmsteads. Fuel is one of the main operating costs, and by keeping its operations geographically concentrated Grain Alliance aims to create economies of scale and a long-term reduction in fuel consumption. |
A large quantity of waste is generated during the production of building components and after building demolition. Construction and demolition activities in Europe are responsible for 40-50% of solid waste production, which was estimated at over 460 million tonnes per year in the EU-27 (about 1.1 tonnes per person per year), excluding excavations. It consists mostly of minerals from the structures.
The construction sector also consumes about half of all natural resources extracted in Europe yearly, which have very high energy demands for their transformation into building products. It has been estimated that 40-50% of all extracted raw materials are transformed into building products. The construction sector uses vast amounts of energy in the first three stages of the production process: resource generation, resource extraction, and intermediate product manufacture.
Therefore, the focus of today's environmental policy is on building end-of-life scenarios and material efficiency. Recycling and material re-use have become common practice, but they are not always environmentally efficient, and material separation of composite structures is very challenging. Building elements often have a longer service life than the building itself and are, therefore, suitable for recovery and re-use after deconstruction. However, component re-use is still not widespread practice because of technological and institutional barriers: structural components are not usually designed to be re-used, even though they are designed for deconstruction in some cases. The main barriers to re-use are (a) the long service life of building products, (b) their spatially fixed nature, and (c) the discrepancy between building owners and users.
Re-using building components has an impact on all aspects of the sustainability of the built environment.
From the business perspective, re-use can strengthen the "green" image of a product, reduce waste charges, address the growing need for housing removal, develop new competences and work opportunities, and encourage the application of smart and modular building systems.
From the ecological perspective, the impact lies in reduced waste generation and natural resource consumption, which can be reflected in higher eco-labelling scores (BREEAM, LEED, ...).
From the social perspective, re-use responds to migration to urban centres, changing values and living environments, and new legislation.
Key roles in re-use process
Designers - Designers have one of the most important roles in
structural elements re-use. Their documentation, drawings and instructions
significantly affects the effort needed at the building deconstruction. Not
only selected components and technologies are important, but also the way how
the final design documentation availability will be secured for the whole
building’s life span. The maximization of environmental, cultural and
financial value at the end of building’s life should be considered already in
the design stage. Designers have to get access to the information about the
actual and potential reclaimed components supply, sizes and material grades
and they need to be flexible to adapt to current situation.
Owners and investors - Re-use project can be successful only when it is
fully supported by the building owner or investor. Therefore the building
owners and investors are equally important as the designers. They need to
understand the process and its advantages and drawbacks. Education and
demonstration of successful cases should be the way to increase building
owners and investors’ motivation.
Raw materials - The increasing demand for building materials is
creating great pressure on natural resources. Moreover, the raw materials are
becoming scarce and more expensive. Material extractors will have to adapt to
this change in order to avoid reducing their operations and profitability.
Material producers (mills) – The production process vary with the
material. Raw material and in most cases also recovered waste (e.g. steel
scrap) is utilized to produce new building material that is sold to the
service centres or fabricators directly.
Service centres - Service centres are businesses that inventory and
distribute materials for industrial customers and perform first-stage
processing. They act as intermediaries between the producers and the
fabricators, and other end users.
Fabricators and erectors - Fabricators purchase materials from service
centres or directly from producers and fabricate the individual components
that are needed to assemble a building. Some fabricators also have their own
erection crews to assemble the components at the building site. Others
subcontract the erection to independent organisations. Fabricators may send
any waste and offcuts back to mills for recycling, usually through a dealer.
Some fabricators will occasionally dismantle old structures and
re-fabricate the reclaimed elements for new uses. A minority may keep a small
stock of reclaimed building parts waiting for appropriate new applications.
Buildings – The way buildings can be assembled to maximize the
usefulness and value of components at the end of a building’s life needs to be
clearly demonstrated to the construction industry. The growing number of
projects successfully shows how components from an old building or structure
can be re-used in a new building, reducing the environmental impact, but the
communication of such successful cases to the construction practitioners is
not sufficient at the moment.
Demolition – Current building removal practice predominantly means
destructive demolition by heavy machinery. There is a perception
that manually extracting building elements for re-use leads to
additional problems and costs. However, even the separation of re-usable
components from demolition waste may yield significant recovery volumes.
Disassembly - The re-use of components can be maximized only when
careful disassembly is carried out. Many projects have shown that disassembly
is possible and should be considered. The volume of disassembled building
components will increase as the demand for them increases.
Salvage yards - Salvage yards store building elements for re-use and
recycling. A few salvage yards will extract components when they recognise
potential for re-use.
Material dealers - Dealers sell waste materials for recycling and
re-use. Material is sorted, graded, batched, and sold back to producers for
recycling. These organisations will also often buy waste materials arising
during fabrication and from other sources. Material dealers will often try to
sell reclaimed material directly from the demolition site.
Design codes - The benefits of re-use can be greatly improved if
building codes emphasize the environmental aspects of the construction and
give designers more opportunities for material sourcing. The immediate goal
should be to enable structural element re-use by establishing clear rules for
material grading and for the safety of structures designed from reclaimed
components.
Design tools – The rapidly developing area of design software is
currently able to offer many useful tools for the environmental optimization
of buildings. Just as building components are physically re-used, they can
also be re-used digitally. The implementation of building information models
(BIM) is essential to manage the smooth transfer of building elements between
two buildings. |
Today, ball bearings are used in many machines and industries to improve movement while reducing friction. When designing a ball bearing, the designer decides its size and type according to its application and requirements. The design of bearings may look very simple, but it is not easy. Many factors are considered by designers before starting the manufacturing process for ball bearings intended for different kinds of machines and applications.
Ball Bearing Rollers
Bearings are designed in different ways according to their application in a specific machine. There are specific factors in the design of the spherical roller thrust bearing that make it very common and popular. Let's review them:
Its high operating speed - Operating temperature and speed change with the application. For example, the operating temperature and speed will be higher in spherical roller thrust bearing applications. Therefore, these factors are always given priority when designing a ball bearing product for a specific application.
The size of the bearing - In bearings, the size of the rollers always matters for performance. Larger rollers provide more strength and a greater reduction in friction in heavy machinery and appliances. Larger machines use bearings with large rollers for better load capacity and a greater reduction in resistance.
The number of rollers - In different sizes of bearings, the number of roller rows changes to provide better performance. Designers always treat the number of rollers and rows as a very important factor when designing a bearing for a specific application. The number of rollers and rows in a bearing also depends on the size of the rollers and the application of the bearing.
The bearing material - Ball bearing products are made from different materials according to their applications and use. The material determines the strength and performance of the bearing under load. For example, ceramic bearings are considered a good option in fishing reels because of their light weight and high strength.
The optimum performance of the bearing - Extra load can change the performance of ball bearing rollers in different machines. Whether in an automobile or a home appliance, the performance of the bearing under specific load conditions is always a primary design consideration.
Designers of automobile bearings consider all these factors before designing a bearing for a specific application. For these reasons, you will find bearings of different sizes and types on the market. The performance and application of a bearing depend on its design and on all of these factors. |
The energy challenge is regularly on top of the world’s political, environmental, and economic agenda. Pushed by demographic growth, aspirations for better living conditions, and the development of digital technologies, electricity consumption is set to double between 2010 and 2030.
As a result, the use of renewable energy becomes a key factor in solving the equation between growing energy demand and the need to reduce greenhouse gas emissions. However, the increase in the introduction of renewable energy into the grid, which has been primarily designed to carry electricity from large power plants to consumers, creates new challenges. More popular renewable energy sources, like solar and wind, are intermittent by nature, which means energy production from these sources can be unpredictable or varied.
The rule of thumb for transmission and distribution grids is that what comes in should be equal to what goes out. If a source of energy suddenly produces less, the demand should be reduced (which is not always possible), or another source of energy should compensate. The same principle applies if a source of energy suddenly produces more; then more energy needs to be utilized. Intermittent renewable energy sources like solar and wind consequently require more flexibility from other energy sources connected to the grid. But this flexibility has a price and is not always technically feasible. To combat this challenge, excess energy needs to be stored and then released when needed.
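To make the balancing rule concrete, here is a minimal Python sketch of a naive storage dispatch loop: the battery absorbs surplus generation, covers shortfalls, and whatever it cannot absorb or supply remains as a residual imbalance on the grid. This is our own illustration, not any operator's actual control logic, and all figures are hypothetical.

    # Naive hourly dispatch: charge on surplus, discharge on shortfall.
    # Units: MW for power, MWh for energy, with a one-hour time step.
    def dispatch(generation_mw, demand_mw, soc_mwh, capacity_mwh):
        surplus = generation_mw - demand_mw
        if surplus > 0:  # more comes in than goes out: charge the battery
            charge = min(surplus, capacity_mwh - soc_mwh)
            return soc_mwh + charge, surplus - charge
        discharge = min(-surplus, soc_mwh)  # shortfall: discharge
        return soc_mwh - discharge, surplus + discharge

    soc = 20.0  # state of charge, MWh
    for gen, load in [(120, 100), (80, 100), (60, 100)]:
        soc, residual = dispatch(gen, load, soc, capacity_mwh=50.0)
        print(f"gen={gen} load={load} -> soc={soc:.0f} MWh, residual={residual:+.0f} MW")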
Energy storage is not completely new to electricity grids. In many places in the world, excess electricity is used to pump water up to high-altitude dam lakes. When energy is required, the pumped water is released and sent through turbines that generate electricity again. The water is then collected in low-altitude dam lakes so that the cycle can start again. But this solution is not applicable everywhere: it requires mountains to provide the difference in height between the high-altitude and the low-altitude lake, and the number of locations in the world where it can be implemented is limited. Additionally, the response time of this solution is not well matched to the response time demanded by renewable intermittency.
The recent rise of mobile devices and the promise of electric vehicles have boosted technological development in the field of electricity storage. New battery, supercapacitor, and flywheel technologies are popping up on the market more frequently, offering new and exciting perspectives for integrating electricity storage into grids. New battery technologies such as lithium-ion, sodium-sulphur, and flow batteries are today the most advanced and can already be used to build large-scale energy storage systems associated with intermittent renewable energy sources, or connected at different grid nodes.
A battery-based energy storage system is composed of two main elements: a large battery made by assembling a great number of battery cells, most often housed in a shipping container, and a power conversion station. At the heart of the power conversion station, one or several inverters convert the DC energy coming from the battery into AC energy sent to the grid, or, in reverse, convert AC energy coming from the grid into DC energy used to recharge the battery. When the energy storage system is associated with intermittent renewable energy sources like PV or wind, the energy used to recharge the battery can also come from PV arrays or wind turbines.
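The conversion chain implies losses at each stage. As a rough illustration of the round trip (the efficiency figures below are assumptions, not vendor data):

    # Illustrative round trip: grid -> inverter -> battery -> inverter -> grid.
    charge_eff = 0.96       # assumed inverter efficiency while charging
    discharge_eff = 0.96    # assumed inverter efficiency while discharging
    battery_eff = 0.92      # assumed battery round-trip efficiency
    grid_energy_in_mwh = 10.0
    delivered = grid_energy_in_mwh * charge_eff * battery_eff * discharge_eff
    print(f"Delivered back to grid: {delivered:.2f} MWh")  # about 8.5 MWh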
Stay tuned next week when we will publish a follow up blog post outlining some of the applications of a battery-based energy storage system. |
AFFF compound for the extinction of class A and class B (hydrocarbon) fires. It contains fluorinated and hydrocarbon surfactants that allow the formation of an aqueous film on the surface of most hydrocarbon fuels, suppressing vapour release and preventing contact with oxygen.
The foam produced in CAFS (Compressed Air Foam Systems) with this concentrate at 0.5% in water is of higher quality than foam produced with conventional equipment, showing faster extinction times and better burnback resistance; this foam can be used efficiently at very low application rates.
CAFILM is designed to be used with low expansion foam equipment (nozzles, monitors, foam chambers, etc.), non-aspirating devices (water spray nozzles and standard sprinklers).
CAFILM is used at a 0.5% dilution rate and performs extremely well in fresh or sea water. It may be proportioned with standard equipment (in-line inductors, bladder tanks, balanced pressure systems, etc.).
CAFILM is highly biodegradable and is manufactured using “C6” fluorocompounds, fulfilling the 2010/2015 EPA PFOA Stewardship Program.
Please contact us for further information |
On Tuesday, December 11, Xcel Energy announced the company will go 100 per cent green. This comes after the announcement that Xcel Energy is reducing its carbon emissions 60 per cent by the year 2026 and 80 per cent by 2030.
Xcel Energy is Colorado’s largest utility company and serves seven other states as well. The company’s commitment to a carbon-neutral future is a light in the darkness amid the American government’s seeming indifference to the profit potential of renewable energy. Americans in the eight states served by Xcel can breathe a little easier, knowing at least that their utility company is right-minded.
In this edition of Renewable Energy News, read about how the major US utility Xcel Energy plans to go 100 per cent green by the year 2050.
Renewable Energy News: Major US Utility Xcel Energy Goes 100% Green
Xcel Energy calls Minneapolis, Minnesota home. The utility holdings company has over 3 million electric and nearly 2 million gas customers in the eight states that it serves. Xcel Energy operates subsidiary utility companies in New Mexico, Texas, Colorado, South Dakota, North Dakota, Wisconsin, Minnesota, and Michigan.
The announcement by Xcel Energy to deliver 100 per cent clean energy is the first declaration of its kind in the country. No other US utility has set as lofty a goal, but many have gotten on board the renewable-energy train.
The Indiana utility NIPSCO is closing down coal plants faster than expected in order to set up renewable energy plants. Another Midwest frontrunner, the utility MidAmerican, says that by 2020 it will deliver 100 per cent renewable energy.
Utility companies across the Midwest are making the transition to renewable energy for several reasons. To begin with, the Midwest of the United States has fewer coal and gas resources than the coasts and the southern United States. But the biggest reason is financial.
What’s the Difference Between ‘Green’ and ‘Renewable’ Energy?
Energy is the big umbrella. Underneath the main ‘Energy’ umbrella sit conventional energy and renewable energy. Green energy is a subset of renewables and sits under the renewable energy umbrella.
Green power is the best kind of energy production for the environment. It is best for reducing carbon emissions and carries a zero-carbon footprint. According to the EPA, green energy is harnessed from geothermal, solar, wind, biogas, biomass, and low-impact hydroelectric sources.
Renewable power is that which comes from energy that naturally restores itself over some time. These include the wind, water, sun, and heat from the core of the earth.
Some renewable energy sources can carry an environmental impact that makes them less than 100 per cent green. For example, large hydroelectric dams and tidal energy machines affect the ocean’s plant and animal life. Dams change the migratory patterns of marine life and clear out the surrounding wildlife that relies on the waterways for sustenance.
The difference between renewable and green energy is small. It distinguishes renewables that are zero-impact from renewables that have a slight or moderate impact.
Proponents of nuclear energy make an argument for the clean and renewable nature of nuclear fission plants. Nuclear plants, however, are not carbon neutral. You must mine the earth to build the plant, store radioactive material for a long period of time, and run the risk of a nuclear meltdown.
You can make an argument for nuclear as a renewable but not a green energy source. Green energy includes anything that has a zero-impact on the environment.
How is Power Generated Currently?
Power is electricity. The power grid of the United States provides accessible power to homes and buildings, produced by electricity generators. To make a generator create electricity, a turbine must spin the generator, which produces an electrical current.
The electricity produced by the spinning generator can be used immediately or stored in a battery for later use. The standard method of making the turbine spin is steam heat.
Heat is the crux that brings you back to fuel. Utilities burn fossil fuel to heat water, which creates steam that rises to spin the turbine. This produces electricity.
None of this, however, has changed Washington’s stance on the global climate change outlook. The White House seems happy to increase coal production tenfold.
The simple process of spinning the turbine with steam is a necessity that brought the world to its current dependency on fossil fuels.
So, when thinking of how best to solve an energy crisis, we must look for methods of spinning the turbine that are just as effective, yet environmentally, socially, and economically responsible.
So far, making the generator’s turbine spin has put the world in a deep pit of problems. But Xcel Energy’s announcement on Tuesday stands as proof that the resources and technology to create zero-emission power are not only available but cheaper than ever.
Why are Utilities Going Green?
Coal is expensive to mine, both in time and human labour. Despite the Trump administration’s stance on bringing back coal, most American utility companies are looking for a cleaner, cheaper source of power to spin the generator turbine.
In the United States, 2019 will see the lowest coal production for nearly 40 years.
Utilities are shifting away from coal, oil, and natural gas in favour of renewables.
A report from the investment bank Lazard shows that the cost of utility energy production is cheaper with renewables than with coal. Even if the United States government shuns the facts, at least the utility owners can read the writing on the wall.
In fact, the cost to produce energy from renewables is steadily decreasing year over year. That has led many US states to deregulate electricity programs to offer more renewable incentives.
For a utility company, green power is a triple win. A customer that gets green energy pays a little more to cover the infrastructure costs, but is happy to do so in order to do their part for the environment. It also means that they are less likely to leave for a cheaper service, since the customer is invested in their green energy.
What’s in it for Xcel Energy?
Xcel Energy is one of the largest utilities in the United States. The company operates in New Mexico, Colorado, Wisconsin, and Michigan, all four of which just put a Democrat in as Governor.
Xcel has already been leading the industry on cutting emissions, and it sees an opportunity to gain a stronghold while the getting is good.
Customers in Colorado are demanding renewable energy, and customers everywhere are demanding lower rates. Public opinion, mixed with cost savings, allows Xcel to expand into new areas and buy up smaller utilities.
Renewables are popular in the Western United States, where oil, coal, and water are all in short supply. Politicians in these states find a strong base of support contingent on the state’s backing of green energy. This trend is working its way east, into Colorado, Michigan, and beyond.
Xcel receives cheaper bids for renewables than for coal, and renewables are rivalling the price of natural gas. So much so that when Colorado cities like Denver and Breckenridge cried out for renewables, Xcel Energy made a change.
The Power of the People
Sure, if you demand an unattainable thing, you won’t get it. But when a lot of customers collectively tell a company what they want, the company is inclined to listen and change.
It is true: SeaWorld no longer takes in killer whales. Circuses all over the United States are no longer using elephants in shows, and Armani doesn’t use real animal fur in its kids’ clothing anymore.
And, when customers say to their utilities, we want renewables, the company listens. The way in which you spend your dollar, where, and for what is the most democratically powerful right you have.
America might not have a federal policy regarding global climate change and cutting carbon emissions, but Americans do. Xcel Energy sees a leadership opportunity where federal leadership is coming up short.
It is to the advantage of the American electorate to support utilities, like Xcel Energy, that commit to a zero-carbon footprint. Companies that advance the public’s realization of the immediacy of climate change, and of the long-term process needed to rectify it, are true American patriots.
Not to mention, the more support Xcel Energy has amongst the electorate, the more political leverage it possesses in Washington. So it makes sense that Xcel will use the increase in revenue and decreased production costs to expand the company’s reach.
Utility companies make their money from a rate of return on energy infrastructure investments. The mining of coal and natural gas offers little to no return on their investment. Green energy, on the other hand, provides an infrastructure investment return on everything.
Xcel Energy is the first to make such an overarching commitment to the wellbeing of the world, but it certainly will not be the last.
If you like reading renewable energy news, share this article with a friend on social media. And subscribe to the newsletter for all the most recent posts about an all-electric future. Thanks for reading! |
The world was changed forever when the industrial revolution took over. The number of factories on earth multiplied exponentially, and a whole new industrial sector was born. Several of those industries are still in use today, but as more evolved versions of their older forms thanks to advances in technology. Technology has become deeply integrated into many industries across the globe and is especially helpful when it comes to using materials for production. Increasing consumer demand has resulted in production taking place on a large scale across the globe. The industrial sector uses various types of systems, such as hydraulic systems, to simplify processes and use energy efficiently.
A hydraulic pump converts mechanical power into hydraulic energy. This can supply a large amount of power to a system so that it functions effectively and uses the energy produced efficiently. The flow that is created can be powerful enough to withstand the pressure that is generated. Hydraulic systems are popular because a large amount of power can be generated using simple tubes and cylinders. There are many different types of hydraulic pumps used in industries all over the world. Gear pumps are a popular choice because of the economic factors that accompany them: they are relatively easy to maintain and cheaper than the other types. Two different types of gear pumps exist, using internal and external gears, and gear pumps with helical gears often produce less noise. A hydraulic split flow pump uses the pressure created by the system to separate the flow using dividers. This can have a variety of applications, as the generated energy can supply several different functions at a time thanks to the splitting of the output. Each piston generates power that can be used separately thanks to the division of the pumping chambers. As a result, this pump can be used for multiple applications. Visit this link http://royaltechnic.com.au/pumps for more info on hydraulic split flow pumps.
Bent axis hydraulic pumps come in variable or fixed displacement types. Utilizing a hydraulic system much like other equipment such as heavy equipment jack stands in Adelaide, these pumps are very efficient and reliable. As a result, they are used in mobile heavy machinery. Rotary vane pumps are mostly used where noise reduction and reliability matter. They employ large, fixed displacement designs. With an adjustable ring, large vanes are arranged around the central rotating unit. These pumps can be easily serviced as well. |
Many people ask us about the difference between member and shareholder. They are similar but distinct, and it is important to know how they differ.
Difference between member and shareholder
The term “member” has different meanings in different contexts. In a non-profit company, a member is someone who has specified rights in respect of – and holds membership in – that non-profit company. Members have a role and responsibilities similar to those of shareholders of profit companies. However, they do not receive dividends or any payment beyond the services they actually provide to the non-profit company.
More commonly, the term “member” is used in connection with close corporations. In a close corporation, a member is a person who is designated as a member in the founding statement of the corporation. They must also be qualified for membership of the corporation. But unlike companies, members of a close corporation have to be natural persons or trusts.
Shareholders hold or own the shares issued by their company. A shareholder’s name also appears in the securities register of the company. Shareholders do have some power in managing the company, but not day to day. Instead, shareholders vote on specific matters at annual general meetings. Different from close corporations, shareholders of companies can be both natural persons and other companies or legal entities.
Members, shareholders and the running of their businesses
Members of close corporations have a role and responsibilities very similar to those of shareholders of companies. The relationship between members is governed by an association agreement. The relationship between shareholders is governed by a shareholders agreement. Members of close corporations both own the business and govern its day-to-day running. Companies, however, have split these two roles between shareholders and directors.
Under the new Companies Act, close corporations cannot be registered anymore. We believe that the new Act also provides better protections to shareholders than the old Close Corporations Act does to members. It is also very easy to convert from a close corporation to a company. |
Reppie is a unique facility, not just because of its location at an altitude of 2,300m, a level at which few WtE facilities of this size have ever been built, but also because of the lower heat value and higher moisture content found in the waste stream. It was therefore critical that the facility was designed by a firm with extensive experience in constructing facilities with waste conditions similar to those of Addis Ababa. For this reason the Consortium engaged CNEEC’s affiliate, China Urban Construction Design and Research Institute Co, Ltd (CUCD). CUCD is China’s vastly experienced national design institute, which has been responsible for designing dozens of such facilities across China, a country with a waste stream composition similar to that found in Africa.
The Consortium combined the requisite local and international reach with the design and financial strength required to ensure the facility could be custom-made for SSA and then scaled to neighbouring countries.
Environmentally-friendly Flue Gas Treatment
Incineration plants have been around for more than 100 years, but modern incineration plants are very different from their forebears. The Reppie facility has adopted modern back-end flue gas treatment technology which ensures that the nitrogen oxides (NOx), sulphur dioxide (SO2), heavy metals and dioxins produced by the plant are drastically reduced, ensuring the plant operates safely within the strict emission limits of the European Union. Any residue left over from the flue gas treatment is recycled or safely disposed of, while the scrubbed and cleaned flue gas is released into the atmosphere through the plant’s twin 50 m high stacks.
There are thousands of waste-to-energy plants around the world, and most of them employ a similar process to that at Reppie. The unsorted municipal solid waste is delivered to the waste reception hall by the Municipality’s 16 t IVECO compactor trucks. Two semi-automatic grab cranes mix the waste before it is loaded onto Martin SITY2000 grates (2 separate lines), where the waste is combusted. Martin’s moving grates are world leaders and ensure an optimal burn-off of the diverse waste stream. The plant is designed to accept waste with a calorific value in the range of 5.5 – 9.5 MJ/kg. Over 80% of this waste is eliminated, and what remains is converted into ash. The bottom ash will be sold as a building material to the local construction industry or safely used as landfill cover at the new Sendafa landfill site. The facility uses magnets to recover steel and other ferrous metals for additional recycling. The facility’s energy recovery comes from the generation of superheated steam to drive a 25 MWe turbine generator, producing an expected 185 GWh of electricity every year. To meet the demands of the Employer, there are two 25 MWe turbine sets for redundant operation, which will ensure increased plant availability and reliability.
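As a rough plausibility check of those figures (our own arithmetic, assuming a single 25 MWe unit runs while the other is on standby):

    # Implied capacity factor from the quoted rating and annual output.
    capacity_mw = 25.0
    annual_output_gwh = 185.0
    hours_per_year = 8760
    capacity_factor = (annual_output_gwh * 1000) / (capacity_mw * hours_per_year)
    print(f"Implied capacity factor: {capacity_factor:.0%}")  # about 84%
|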
Making Chemicals the Bio Way
People have been producing chemicals and compounds by biotechnological methods for many centuries. Today's biobased chemicals are produced from renewable resources either by fermentation, or by chemical and enzymatic conversion, followed by complex product isolation and cleaning procedures.
Typical products of the already well-established biochemicals industry include amino acids, citric acid, thickeners such as xanthan gum and carrageenan, yeasts and yeast extracts, starter cultures and technical enzymes.
A major growth area for the biochemicals industry is in the biotechnological production of chemicals and compounds that would traditionally be derived from petrochemicals. Processes have already been developed to produce a range of these platform molecules using renewable resources, which will reduce our reliance on non-renewable resources.
GEA has longstanding experience in project design and project management for large industrial biochemical plants. We can offer integrated solutions, and complete processing lines for the biochemical industries. Our expertise spans a comprehensive range of process technologies for production, purification and concentration, including:
- Biomass separation by centrifuges or membrane filtration
- Product isolation and purification by distillation, melt crystallization or membrane filtration
- Concentration by evaporation
- Crystallization and drying of the final product
- Concentration and drying of byproducts by evaporation and drying
GEA’s development centers and global process and technology expertise can support customers in the optimization of their processes and scale up, to derive and implement the most efficient and sustainable processes for biochemicals production.
Traditional and new bio-chemistry
Many chemicals have been produced by biotechnological methods for many centuries. Typical products of this well-established industry are
- Amino acids
- Citric acid
- Thickeners like Xanthan Gum and Carrageenan
- Yeasts and yeast extracts
- Starter cultures
- Technical enzymes
GEA has great experience and a large number of references in this traditional biochemical industry, mainly for
- Concentration of products and byproducts by evaporation
- Biomass separation by mechanical separation or filtration
- Solvent recovery for extraction and precipitation processes
- Product and byproduct drying
Platform bio-chemicals and bioplastic monomers
A new trend in chemical production with promising potential and huge growth rates is the substitution of petrochemicals with biochemicals. These so-called platform chemicals will create a renewable chemistry in the near future. Processes have been developed to produce different platform molecules such as:
- Succinic acid
- Lactic acid
- 1,2 / 1,4 Butanediol
- 1,2 / 1,3 Propanediol
- Azelaic acid
and many others.
GEA offers equipment, technology and process competence for the complete downstream processing of these substances.
- Cross-flow membrane filtration
- Mechanical separation
- Melt crystallization
- Distillation & Rectification
- Solution crystallization
- Drying and packaging
The GEA development centers worldwide support our customers to scale up and optimize their processes to maximize process feasibility. |
Stamp Mill for Gold Ore Crushing. The Argo Gold Mine, Mill and Museum is a National Historic Site.
ZIMMERMAN GOLD ORE STAMP MILL. John Zimmerman and his family had a huge impact upon the early history of the Poudre Canyon.
This stamp mill has ten stamps, each weighing 450 lbs. The purpose of the stamp mill is to crush gold ore.
Arrastra (Wikipedia). An arrastra (or arastra) is a primitive mill for grinding and pulverizing (typically) gold or silver ore.
Page 187: In running 25 to 30 stamps with the average grade of ore — 15 pennyweights — to the ton, the supply for a barrel is obtained each day.
Stamp Mill: homemade, Depression-era, for a gravel-type gold mine, California Mother Lode area.
In the field of extractive metallurgy, mineral processing, also known as ore dressing, is the process of separating commercially valuable minerals from their ores.
You can read The Stamp Milling of Gold Ores by Rickard, T. A. (Thomas Arthur), 1864- in our library for absolutely free.
Stamp mills are used by miners to crush valuable ore and extract the metals within them. Gold, silver, and copper are common metals that are found within ore, and
Catalog Record: The Stamp Milling of Gold Ores (HathiTrust Digital Library).
Antique Gold Ore Impact Mills. The stamp mill was adopted to crush gold ore, producing gold ore powder and gold flake after the crushing process.
Book digitized by Google and uploaded to the Internet Archive by user tpb.
Milling Ore at Bodie: just five months after the 30-stamp mill was completed in 1880. The Milling of Gold Ores in California.
The Stamp Milling of Gold Ores by Rickard, T. A. (Thomas Arthur), 1864- at OnRead, the best online ebook storage. Download and read online for free.
Century-old working stamp mill for crushing ore at Reed Gold Mine. Image courtesy N.C. Historic Sites, Division of Archives and History.
Here is an ancient gold ore milling process used for gold extraction; the large majority are 6-stamp mills. Gold Milling Process: Primitive and Basic.
Gold Stamp Mill for Sale, Gold Stamp Mill Zimbabwe, Gold Stamp Battery. Gold Milling: free-milling gold ores, in which the ore is oxidized, can be well amalgamated.
Book digitized by Google from the library of the University of Michigan and uploaded to the Internet Archive by user tpb.
|
Forging is a process involving the shaping of metal using localized compressive forces. The blows are delivered with a hammer (often a power hammer) or a die. Forging is often classified according to the temperature at which it is performed: cold forging (a type of cold working), warm forging, or hot forging (a type of hot working); for the latter two, the metal is usually heated in a forge. Forged parts can range in weight from less than a kilogram to hundreds of metric tons. Forged parts are widely used in mechanisms and machines whenever a component requires high strength. Forging usually requires further processing (machining) to achieve a finished part. |
Process costing is the allocation of production costs to output units. The production process usually involves multiple stages and business units. The first-in first-out inventory valuation method assumes that the first items into inventory are the first items used in production. The weighted average cost is equal to the total cost of all inventory items divided by the number of units.
According to the Accounting for Management website, the main difference between the FIFO and weighted average method is in the treatment of beginning work-in-process or unfinished goods inventory. The weighted average method includes this inventory in computing process costs, while the FIFO method keeps it separate.
Costs for raw materials and conversions are proportionately allocated to equivalent units, which include finished and unfinished goods. Conversion costs include direct labor and factory overhead costs. For example, if 100 units of ending work-in-process inventory used 75 percent of the purchased raw materials and 60 percent of the conversion costs, then the equivalent units for process costing purposes are 75 units (100 x 0.75) and 60 units (100 x 0.60), respectively. If 100 additional units were completed and shipped to customers, then the equivalent units are 175 (100 + 75) and 160 (100 + 60) for raw materials and conversion costs, respectively.
The beginning work-in-process inventory is subtracted from the totals in the FIFO method. Continuing with the example, if the beginning work-in-process inventory consisted of 20 units, and it includes 100 percent of raw materials and 50 percent of conversion costs, then its equivalent units are 20 (20 x 1.00) and 10 (20 x 0.50), respectively. Subtracting these from the ending work-in-process figures leaves 55 (75 - 20) and 50 (60 - 10) equivalent units for raw materials and conversion costs, respectively. Therefore, using the FIFO method, the total equivalent units are 155 (100 + 55) and 150 (100 + 50), respectively.
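These equivalent-unit computations can be reproduced in a few lines of Python (a minimal sketch; the function name is our own and the quantities are those of the example):

    # Equivalent units for one cost category, weighted average method:
    # completed units plus the completed fraction of ending WIP.
    def equivalent_units_wa(completed, ending_wip, pct_complete):
        return completed + ending_wip * pct_complete

    materials_wa = equivalent_units_wa(100, 100, 0.75)    # 175
    conversion_wa = equivalent_units_wa(100, 100, 0.60)   # 160

    # FIFO nets out the work already embedded in beginning WIP:
    # 20 units, 100% complete for materials and 50% for conversion.
    materials_fifo = materials_wa - 20 * 1.00             # 155
    conversion_fifo = conversion_wa - 20 * 0.50           # 150
    print(materials_wa, conversion_wa, materials_fifo, conversion_fifo)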
Equivalent Unit Cost
The beginning inventory costs and additional costs incurred in a period are combined in the weighted average method. Continuing with the example, if the total raw material costs under the weighted average method are $1,250, then the equivalent unit raw materials costs are about $7.14 ($1,250 / 175). If the conversion costs are $3,500, then the equivalent unit conversion cost is about $21.88 ($3,500 / 160). Therefore, the total equivalent unit cost is $29.02 ($7.14 + $21.88).
Under the FIFO method, the beginning work-in-process raw materials and conversion costs are excluded. If these were $250 and $1,000, respectively, then the equivalent unit costs are about $6.45 [($1,250 - $250) / 155 = $1,000 / 155 = $6.45] and about $16.67 [($3,500 - $1,000) / 150 = $2,500 / 150 = $16.67]. Therefore, the total equivalent unit cost using the FIFO method is $23.12 ($6.45 + $16.67).
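The same unit-cost arithmetic, as a short sketch using the dollar figures from the example above:

    # Cost per equivalent unit under each method (illustrative figures).
    total_materials, total_conversion = 1250.0, 3500.0  # include beginning WIP
    beg_materials, beg_conversion = 250.0, 1000.0       # beginning WIP costs

    wa_unit = total_materials / 175 + total_conversion / 160
    fifo_unit = ((total_materials - beg_materials) / 155
                 + (total_conversion - beg_conversion) / 150)
    print(f"weighted average: ${wa_unit:.2f} per equivalent unit")  # ~$29.02
    print(f"FIFO: ${fifo_unit:.2f} per equivalent unit")            # ~$23.12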
The raw materials and conversion costs are assigned to the completed and work-in-process units. To conclude the example, under the weighted average method, the completed unit cost is $2,902 (100 x $29.02), the work-in-process cost is about $1,848 [(75 x $7.14) + (60 x 21.88)] and the total cost is $4,750 ($2,902 + $1,848). In the FIFO method, the completed unit cost is $2,312 (100 x $23.12), the work-in-process cost is $1,188 [(55 x $6.45) + (50 x 16.67)] and the total cost is $3,500 ($2,312 + $1,188). |
Q: There’s a lot of talk today about solar and wind power, but what about biomass? How big a role might this renewable energy source play in our future? Couldn’t everyday people burn their own lawn and leaf clippings to generate power?
– Deborah Welch, Niagara Falls, N.Y.
A: The oldest and most prevalent source of renewable energy known to man, biomass is already a mainstay of energy production in the United States and elsewhere. Since such a wide variety of biomass resources is available – from trees and grasses to forestry, agricultural, and urban wastes – biomass promises to play a continuing role in providing power and heat for millions of people around the world.
According to the nonprofit Union of Concerned Scientists (UCS), biomass is not only a renewable energy source but also a carbon neutral one, because the energy it contains comes from the sun. When plant matter is burned, it releases the sun’s energy originally captured through photosynthesis. “In this way, biomass functions as a sort of natural battery for storing solar energy,” reports UCS. As long as biomass is produced sustainably – with only as much grown as is used – the “battery” lasts indefinitely.
While biomass is most commonly used, especially in developing countries, as a source of heat so families can stay warm and cook meals, it can also be utilized as a source of electricity. Steam captured from huge biomass processing facilities is used to turn turbines to generate electricity. Of course, biomass is also a feedstock for several increasingly popular carbon-neutral fuels, including ethanol and biodiesel.
According to the federal Energy Information Administration, biomass has been America’s leading nonhydroelectric renewable energy source for several years running through 2007, accounting for between 0.5 and 0.9 percent of the nation’s total electricity supply. In 2008 – although the numbers aren’t all in yet – wind power probably took over first place due to the extensive development of wind farms across the country.
According to the USA Biomass Power Producers Alliance, generating power from biomass helps Americans avoid some 11 million tons of carbon dioxide emissions that burning the equivalent amount of fossil fuels would create each year. It also helps avoid annual emissions of some 2 million tons of methane, which, as a greenhouse gas, is 20-plus times more powerful than carbon dioxide.
The largest biomass power plant in the country is South Bay, Florida’s New Hope Power Partnership. The 140-megawatt facility generates electricity by burning sugar cane fiber (bagasse) and recycled urban wood, powering some 60,000 homes as well as the company’s own extensive milling and refining operations. Besides preserving landfill space by recycling sugar cane and wood waste, the facility’s electricity output obviates the need for about a million barrels of oil per year.
Some homeowners are making their own heat via biomass-fed backyard boiler systems, which burn yard waste and other debris, or sometimes prefabricated pellets, channeling the heat indoors to keep occupants warm. Such systems may save homeowners money, but they also generate a lot of local pollution.
So, really, the way to get the most out of biomass is to encourage local utilities to use it – perhaps even from yard waste put out on the curb every week for pickup – and sell it back to us as electricity.
Got an environmental question? Write: EarthTalk, c/o E – The Environmental Magazine, Box 5098, Westport, CT 06881. Or e-mail: [email protected]. |
While power generated from wind and solar sources is growing in modest amounts, global nuclear generation capacity has fallen from last year's record of 375.5 gigawatts to 365.5 gigawatts in 2011. Certainly, 10 gigawatts is a small drop in absolute terms as well as percentage-wise, but it is a 10-gigawatt decline. For comparison, 10 gigawatts is half of the global solar market or about one-quarter of the peak California ISO load.
It's probably too early to call the declining reliance on nuclear power a trend, but high costs, recent disasters, and government moratoriums on nuclear are creating a difficult environment in which to design and build new nuclear. Additionally, an aging fleet of reactors, in some cases already past their expiration dates, is going to require decommissioning or massive overhauls. And a general slowdown in electricity usage, along with low natural gas prices, will not encourage nuclear build-out. Just 10 of Japan's 54 nuclear reactors are connected to the grid; China halted construction on 25 reactors right after the Fukushima explosions; and Germany and Switzerland announced their intention to phase out nuclear plants following the disaster.
Worldwatch Institute's Vital Signs Online (VSO) report indicates that "nuclear's share of world commercial primary energy usage fell to around five percent in 2010, having peaked at about six percent in 2001 and 2002." Just four countries -- the Czech Republic, Romania, Slovakia, and the United Kingdom -- actually increased their share of nuclear power in an appreciable way between 2009 and 2010.
The trend shows signs of continuing. According to the report, "Although 16 new reactors began construction in 2010, the highest number in over two decades, that number fell to just two in 2011, with India and Pakistan each starting construction on a plant."
The total number of reactors in operation around the world has declined from 441 at the beginning of the year to 433.
China and the U.S. are the exception to the decline in global nuclear electricity generation. China was responsible for 10 of the 16 reactor construction starts in 2010 and began to install nearly 10 gigawatts of capacity in 2010, representing 62 percent of capacity construction worldwide. China currently has 27 reactors and 27 gigawatts of capacity under construction, according to the report.
The United States doesn't seem to be retreating from the nuclear power cause either. In 2010, the Obama administration approved $8.3 billion in loan guarantees for the construction of nuclear reactors, and recent budget proposals have raised that figure by an additional $36 billion.
Further highlights from the report:
- China, India, Iran, Pakistan, Russia, and South Korea have contributed around 5 gigawatts of new installed capacity since the beginning of 2010.
- During this same period, nearly 11.5 gigawatts of installed capacity has been shut down in France, Germany, Japan, and the United Kingdom.
- Germany alone has taken around 8 gigawatts of installed nuclear capacity offline this year.
- Currently, 65 reactors are under construction around the world; however, 20 of these have been under construction for more than 20 years.
- Construction of the first nuclear power plant to be built in France in 15 years has been delayed until 2016, and its projected cost has grown from approximately $4.4 billion to approximately $8 billion.
- The average age of decommissioned reactors worldwide has risen to 23 years.
- In 2009, the U.S. Nuclear Regulatory Commission received 26 nuclear reactor permit applications, but only four of those sites have plans for construction. |
Do you think that this is an operational methodology or a philosophy? Please explain. The Goal is a management-oriented novel built around the concepts of systems management. The novel centers on Alex Rogo and the problems in his production plant. The plant is constantly behind schedule and unprofitable. Alex is given three months to turn things around or the plant will be shut down.
The Goal introduces the “Theory of Constraints (TOC)” which is an overall management philosophy that adopts the idiom “A chain is no stronger than its weakest link”. This emphasizes how organizations and processes are vulnerable because that weakest link can always adversely impact and damage the company. The “goal” is to make money and anything that assists in doing this is productive, while anything that hinders this is a bottleneck. The Goal goes on to identify bottlenecks (constraints) in the manufacturing process and how identifying them helps reduce impact and allows for controlling the flow of materials.
One of the main emphasis points throughout is the communication element. Whenever a problem arises, the team discusses it amongst themselves to find a solution. The discussion process uses five steps to solve any problem:
1. Identify the system's constraint.
2. Decide how to exploit the system's constraint.
3. Subordinate everything else to the prior decisions.
4. Elevate the system's constraint.
5. If, in the prior steps, the constraint has been broken, go back to step one.
The line between an operational methodology and a philosophy is, in my opinion, not clear-cut.
An operational methodology in certain instances can be well defined. Let's start with a simple example of putting a computer together. The steps could be:
• Installing the motherboard
• Installing the processor
• Installing the CPU cooler
• …
While not all computers contain the same parts or pieces, at an abstract level all computers do require a standard set of pieces to run. This high-level standard could be the operational methodology for assembling a PC. It would be tough to argue that this is a philosophy.
Now if we get into more detail and begin discussing processor optimization, this could blur the line between operational methodology and philosophy based on proven facts and research. The actual definition of philosophy as stated by Google is “A set of views and theories of a particular philosopher concerning such study or an aspect of it”. With that being said, I do feel that The Goal is abstract enough to be considered an operational methodology. The thesis of The Goal is that the goal of an organization is to increase throughput while simultaneously reducing both inventory and operating expense.
That statement can be applied to all businesses. All organizations have some constraints or bottlenecks. Lack of resources and/or resource allocation is one of the most common and challenging constraints organizations face. A good attempt to counter-argue The Goal would be to look at one of the world's most profitable companies, Apple. An initial thought might be: what possible constraints could such a large and profitable company have? Apple recently set the record for the most valuable company in history, with a net worth of $624 billion.
However, this has not come without constraints or bottlenecks in the process. One of the major issues Apple has had for a while is in the manufacturing process. At multiple points in time, Apple has had difficulty manufacturing enough iPhones to meet demand. This of course was a constraint on the system and on the bottom line. More recently, Apple has come under fire about its worker treatment in some of its manufacturing plants in China. These constraints have major impacts on the business, and Apple is constantly making an effort to identify them and streamline the process.
If we can apply The Goal to the world's most profitable company, it seems safe to say that other companies will in some shape or form fall under this umbrella as well.
How to apply Constraint Management to a Production Facility? How about to a Bank? Assume that we can apply constraint management! A production facility seems like the classical example in terms of analyzing the Theory of Constraints. As time has evolved, production facilities have become larger, more complicated, and more dependent on technology than ever.
With that said, even the most advanced of processes has at least one constraint (a Theory of Constraints principle), and that constraint must be properly managed. Following the five-step process, the first thing we need to do is identify the issues/constraints in the system. This is quite broad and can cover anything from issues with human capital down to bottlenecks in the shipping process. To correctly identify the issue or issues that are holding the facility back, the identification process must be extremely detail-oriented and thorough.
Without this full-scope analysis, simple errors can occur, which in turn will lead to unexpected or undesired results. The next steps would be to exploit and subordinate. Once we actually identify the constraint, we have to turn our focus to how to get more production within our current capacity limitations. We have to be extremely careful when doing this, as exploiting the constraint does not always ensure output on the other end. In today's day and age, one of the most important factors to look at is the technology behind the processes.
The key question in a production facility is whether we are using the technology in our process flow to maximum capacity. Can we revise our business process with the help of technology to maximize our output? Keep in mind, human capital can be considered part of the process, inclusive of technology. A perfect case study in terms of identifying constraints from a broad (internet) perspective is the world's largest online retailer, Amazon. Amazon started as an online bookstore in 1994. Amazon soon exploited not only a constraint in its own company, but a constraint in sales as a whole.
Amazon soon expanded from selling only books to selling electronics, DVDs, CDs, and the list continued to grow. By creating a one-stop shop where a user can buy anything, it found a need in the market and created a monopoly. Studying Amazon's warehousing and shipping practices further emphasizes its efficiency, but we will leave that detailed discussion for another time. In terms of applying constraint management to a bank, the first step in a normal climate would be to identify which department or area has the weakest link.
However, the economy has recently gone through some of the toughest times in history, and the landscape has changed significantly. Without understanding the climate and market we are currently in, we are bound to get erroneous results from our analysis phase. With the changing landscape also come changing laws. Many of the fees that banks were once able to assess without monitoring have now been eliminated. This brings to light a major constraint on generating revenue. Banks now have to be more creative in how they generate the revenue lost from the elimination of many fees and from interest rate hikes.
Once again, technology can be the focal point here. In the case of banks, which store an abundance of data, technology can really help or hinder revenue generation. From the top down, analyzing data with current technologies can help identify trends and opportunities for optimization. This can be seen in investment opportunities, mortgage rate optimization, and even SLA (Service Level Agreement) timeframes.
Can we detect bottlenecks? When yes, and when not? Explain this. When is JIT better than The Goal?
A bottleneck in a process occurs when input comes in faster than the next step can use it to create output. Detecting these bottlenecks, however, can be very difficult. When dealing with bottlenecks in a production line, the problem magnifies. The cause can be random events (such as random machine failure) or other changes. A random-event failure example in the IT world is a “system out of memory” exception. This error is known to be a huge constraint at random times in overused systems. The issue with this error is that it happens at random times, and thus reproducing it can be almost impossible.
Before finding the bottleneck, the first thing you must do is define the goal of the system you are working on. Simply put, you can't find the bottleneck if you have no end goal in sight. For most production operations this would be to make money. There are two main types of bottlenecks:
1. Short-term bottlenecks – These are caused by temporary problems. A good example is when key team members become ill or go on vacation. No one else is qualified to take over their projects, which causes a backlog in their work until they return.
2. Long-term bottlenecks – These occur all the time.
An example would be when a company's month-end reporting process is delayed every month because one person has to complete a series of time-consuming tasks – and he can't even start until he has the final month-end figures. Identifying bottlenecks in production is normally easier than in a business process. In a production line you can identify which point has the most pile-up or which process is taking the longest, and pinpoint that process. However, in a business process there are many other factors to consider. As an example, let's consider a director of a software development team. The director oversees the whole team and is trying to identify what bottlenecks occur in the development process.
Let's assume the team is made up of four people. Three of the members are extremely talented; however, the fourth team member is not pulling his/her weight. You would assume this would come to light, but what if the other team members constantly cover for the fourth member and pick up the slack? The work gets done; however, production could be much higher if the fourth member were replaced with a more talented one. This bottleneck would be a tough one to detect and might go undetected for a long time.
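For the production-line case, the “which station piles up the most” heuristic is easy to mechanize. A minimal sketch, with hypothetical station names and rates:

    # Flag the station with the lowest throughput as the likely bottleneck.
    stations = {
        "cutting": 120,   # units per hour each station can process
        "welding": 45,
        "painting": 80,
        "packaging": 100,
    }
    bottleneck = min(stations, key=stations.get)
    print(f"Likely bottleneck: {bottleneck} ({stations[bottleneck]} units/hour)")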
For reasons like this, bottlenecks in a business process can be very difficult to detect. The Just-in-Time philosophy is a strategy aimed at improving a business's ROI by reducing in-process inventory and associated carrying costs. The JIT philosophy evolved out of the production lines of Toyota, and Toyota became a competitive threat to the US in the automotive industry.
Fig 2.0 Just-in-Time model
JIT/lean manufacturing is well suited to repetitive environments such as those for producing automobiles and consumer electronics; however, it is not a panacea for all production companies. JIT is not well suited to assembly or fabrication firms, or to small-batch or job-shop operations.
While The Goal focuses on finding the weakest link, JIT concentrates on inventory reduction and the exposure of waste. In a business process where human resources are part of the process, there could be common ground between the two theories. Again, consider a process where the weakest link is a team member. In that sense, that team member could unfortunately be seen as the “waste” in the process.
With that said, The Goal and TOC have made significant contributions in sales, marketing, and product development, just to name a few areas. JIT has had a huge impact on other industries such as oil, which is almost purely supply-and-demand driven.
Based on your Recipe (Question 2), develop a plan to apply the Theory of Constraints to the business case: “Paediatric Orthopaedic Clinic at the Children's Hospital of Western Ontario”. By the way, where is the bottleneck in the case study? (to be uploaded – webcourses ucf – on September 7, 2012)
The case study makes note of the surveys the customers were given, which were written forms that 218 patients filled out. Based on those surveys, key data points were introduced. To analyze utilization rates for job functions, we divided the hours spent on the process by the total hours available for work from that worker (Figure 3.0).
Fig 3.0 Utilization Rates
Going back to our main focus from question #2 in terms of technology, the case study does not give details about the technology systems the clinic uses or whether each department is using the same system.
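The utilization calculation itself is one line per role; a small sketch (the roles and hours below are hypothetical, not figures from the case study):

    # Utilization = hours spent on the process / hours available for work.
    hours = {
        "reception": (30.0, 40.0),  # (hours on process, hours available)
        "radiology": (38.0, 40.0),
        "surgeon": (22.0, 40.0),
    }
    for role, (busy, available) in hours.items():
        print(f"{role:10s} utilization = {busy / available:.0%}")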
The technology question would be an interesting aspect to look at. We are focusing on the process, but we do not know the system dynamics behind it. Some interesting questions to ask, which are not mentioned in the case:
• Do the different departments communicate effectively with each other through software?
• Are notes being recorded and shared across all departments?
• What are some of the manual processes/constraints that software could possibly help expedite?
One of the biggest patient complaints was losing money by missing work to take their child to the hospital.
A simple but effective solution could be to change the hours of operation – opening very early or closing very late – so parents can visit outside their working hours. Purchasing additional equipment could also help if the clinic's research shows that a lack of equipment is a constraint. Lastly, the bottlenecks in the process seem to revolve around the analysis of the wait times and the utilization rates. In the radiology department, the patient wait time of 58 minutes is almost double any other wait time. |
Plastic Bags and Film
What can be recycled?
Retail bags, dry cleaning bags, bread bags, shipping pillows, sealable or “zippered” plastic food bags, flexible plastic wraps on paper towels, cases of soda, cotton balls, bathroom tissue, and more
Why it wants to be recycled
Plastic bags and flexible wraps, also known as “film,” are recyclable and — like other plastics — can be used to make many other products. Recycling reduces litter, lessens the amount of waste going to landfills, and gives a valuable resource a second life.
How to recycle it
Return plastic bags and films to labeled receptacles, widely available at grocery and retail outlets. Do not include food or cling wrap, prepared food bags, biodegradable bags, or film that has been painted or has excessive glue. Check your local ordinances for specific instructions — this category is easily contaminated with incorrect preparation or the wrong materials.
What do recycled plastic bags and film become?
Once collected, plastic bags and film are baled and sent to recycling centers. Here, the used bags are cleaned, processed, and turned into flakes and pellets. The pellets are used to make new plastic shopping bags, durable outdoor fences, decks, shopping carts, and home building products. |
Platinized Titanium Anode
Definition - What does Platinized Titanium Anode mean?
A platinized titanium anode is simply a titanium anode coated with platinum or platinum metal oxides. Platinized titanium anodes act as inert anodes; they are non-consumable and long-lasting. These anodes are insoluble in the electrolyte under the conditions present in electrolysis.
Unlike carbon anodes, platinized titanium anodes do not corrode during the aluminum reduction process and do not release CO2, but rather pure oxygen.
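For reference, the contrast can be written as the two anode half-reactions of the aluminum reduction (Hall–Héroult) cell; this is standard electrochemistry added here for illustration, not text from the source:

```latex
% Anode half-reactions in the aluminum reduction cell
\begin{align*}
\text{Consumable carbon anode:} &\quad \mathrm{C(s) + 2\,O^{2-} \longrightarrow CO_2(g) + 4\,e^-} \\
\text{Inert (platinized titanium) anode:} &\quad \mathrm{2\,O^{2-} \longrightarrow O_2(g) + 4\,e^-}
\end{align*}
```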
Corrosionpedia explains Platinized Titanium Anode
The change from lead anodes to platinized titanium anodes was encouraged by the introduction of high efficiency fluoride-free baths, which attacked lead anodes more severely than traditional hard chrome plating chemistries. Initially lead rod anodes were replaced with titanium rod anodes, which had a copper or aluminum core for high current applications, and over time the additional advantages of platinized titanium anodes were recognized.
Some of the advantages of platinized titanium anodes are:
- Increased throughput with reduced plating times
- Reduction or elimination of secondary processes, such as grinding
- Anode geometry remains constant over time, allowing consistent optimized plating results
- Long operating life
- Low maintenance
- Higher bath life
- Energy savings and light weight
Some applications where these anodes are used:
- Sewage treatment plants
- Electrosynthesis/chlorate and perchlorate production
- Electroplating and cathodic protection |
Novartis seeks to use indacaterol for seven more years
The active ingredient is made using a solvent that will be banned from 2017
Swiss pharmaceutical giant Novartis has told EU regulators it will have to stop producing the active ingredient indacaterol at its Ringaskiddy plant unless it is allowed to continue using a banned toxic chemical.
Sales of indacaterol from the Ringaskiddy plant ranged from 20 million to 65 million Swiss francs (€18 million-€59 million) per year over the 2012-14 period. Indacaterol is used in medicinal products for the treatment of chronic obstructive pulmonary disease in 110 countries.
Novartis uses diglyme, which will be banned for use in the EU on August 22nd, 2017, in the early stage of the manufacturing process for indacaterol. The company has applied to the EU for authorisation to continue using diglyme in Ringaskiddy for another seven years while it puts an alternative manufacturing process in place and obtains the necessary approvals from health authorities around the world.
Human reproduction
The net benefit of the seven-year authorisation would be €45 million-€75 million, Novartis told the European Chemicals Agency.
Diglyme is toxic for human reproduction. Between 24 and 40 workers are potentially exposed to diglyme for a few weeks each year at the Ringaskiddy plant, Novartis said, adding that exposure is minimised through the use of contained systems and is within the safe limit.
As a solvent, diglyme does not end up in the final product that consumers use. All waste streams and filters are incinerated so “environmental impacts can be considered as negligible”, the firm said.
Authorisation
The European Chemicals Agency will advise the European Commission on whether the authorisation should be granted. Member states will vote on the Commission's proposal.
More than 1.5 million patients have been treated for chronic obstructive pulmonary disease using indacaterol-based products. Interruption of supply would result in patients not having access to their medication. The firm argues that the resulting loss of confidence “cannot be quantified but is expected to have far-reaching consequences for all Novartis products”.
“Currently, chronic obstructive pulmonary disease is the fifth leading cause of death worldwide and is expected to become the third by 2020,” the company said.
“Novartis is known as an innovative market leader in this field. [It is] nevertheless facing a very competitive market situation as many new original and generic compounds enter the market.” |
The major route to manufacture ethyl acetate is the esterification of ethanol with acetic acid, although some is made by the catalytic conversion of acetaldehyde with alkoxides. Do you understand ethyl acetate production technology?
In the primary esterification procedure, a mixture of acetic acid and ethanol containing a small amount of sulphuric acid is fed continuously into an esterifying column, in which it is refluxed. The mixture drawn off passes to another refluxing column, where a ternary azeotrope containing 85 percent ethyl acetate is removed.
In the next step of the ethyl acetate production process, water is mixed with the distillate, after which it separates into two layers. The top layer is fed into a refluxing column, where the fraction containing 95 percent ethyl acetate is distilled to remove any remaining impurities.
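For reference, the acid-catalysed (Fischer) esterification at the heart of this process can be written as follows; this is standard chemistry added for illustration:

```latex
% Fischer esterification of ethanol with acetic acid (sulphuric acid catalyst)
\[
\mathrm{CH_3COOH} + \mathrm{C_2H_5OH}
\;\overset{\mathrm{H_2SO_4}}{\rightleftharpoons}\;
\mathrm{CH_3COOC_2H_5} + \mathrm{H_2O}
\]
```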
Call Sanlifengxiang now, and we will show you our complete ethyl acetate technology and advanced equipment. |
What History Tells Us
History tells us that leaders who lead with ego can have devastating consequences on their followers. For that reason let’s take a look at a clear definition of ego. Merriam Webster defines ego as the part of the mind that mediates between the conscious and unconscious and is responsible for reality testing and personal identity.
Now, based on that definition alone, we can tell why the weight of leadership would be heavy for anyone! If your ego is the only thing guiding you, then rational thought about positive outcomes for everyone may not be the order of the day. Hence the balance of ego and leadership is so vitally important for followers, peers and everyone involved.
Finding Leadership Balance
So, how do we balance ego and leadership? Let me share three things you can implement immediately.
(a) Make the commitment to do the work and discover your authentic self – the person who shows up when no one else is around. In other words, keep it real with yourself about your beliefs, attitude, intentions and subsequent behavior. The more you do the work, the greater the likelihood you will find little chinks in your mental armor that need to be addressed.
(b) No matter how high you go in business, engage mentors and coaches for feedback and clarity for sustainable success. Even when it’s uncomfortable, feedback is a goldmine for your ability to grow and develop.
(c) When necessary, slay your ego by taking the one-down approach when dealing with others. In other words, humble yourself and be ready to listen and learn from others. This will immediately allow you to gain clarity from others as you lead!
Point of Clarity Quote:
“Make your ego porous. Will is of little importance, complaining is nothing, fame is nothing. Openness, patience, receptivity, solitude is everything.” |
Dealing with Gender Issues in the Workplace
Men and women have had trouble communicating effectively since the beginning of time, and it's not just in the workplace. In fact, the differences between the genders have long been the topic of debate and the subject of many books. When it comes to the workplace, however, it's not important that you even try to understand the differences between the genders. That's an exhaustive subject that someone would probably never fully comprehend anyway. It is critical, though, that you learn the skills needed to work together in harmony, and practice effective communication.
Gender Issues in the Workplace
Sorry guys, but it's a fact: women are still discriminated against in the workplace. It's true that women now hold positions formerly held only by men. And it's true that most men respect professional women in the workplace and no longer hold the "cave man" belief that women belong at home, raising kids and cooking meals. However, discrepancies between men and women – and some amount of discrimination – still exist.
Discrepancies in Pay
According to Harvard Independent, women still don't earn the same salaries as men for the same job status or position. In fact, a woman earns just 80 cents for every dollar earned by her male peer. This holds true even though more women than men hold bachelor's degrees, and women are enrolling in college more often than men. The differences in pay don't just occur early on in their careers either. Throughout their careers, women continue to earn less money than their male counterparts.
The reasons for the differences in pay reflect societal and cultural views on family and children. Women typically become pregnant in the middle of a career and take time off for maternity leave. Once they have a child, they don't work as many hours as their male counterparts because of sick children, activities, and other events that come with motherhood. They also tend to travel less frequently once they have a child. Because of this, they can get passed over for promotions.
That's not to say the difference in pay is simply because women tend to work fewer hours after having a child than before. It's not that at all. Instead, it's that society expects women to take on a greater share of household and family responsibility. When a child is born, the mother does not get paid for maternity leave, so the father continues to work. After the child is born, it is the mother who is expected to take time off to care for the child more often than the father. That's not to say men don't take time off for their kids. They do. However, in most situations, it is the woman – the mother – who does. To an organization, this can appear as not making her job a priority, as not being available for the time required for that new promotion, and a slew of other problems.
But what can we do about it? The answer is too complex to tackle in this course. However, it's important to understand the perceptions of the different genders in the workplace. Men are still viewed as the providers, as the ones who will work the long hours and do what it takes to get ahead for the good of their families. Women are still viewed as the ones responsible for household obligations and nurturing their children. The truth is, both genders value their careers and personal advancement. Women shatter glass ceilings every day – and it's not because they're just marking time until they start a family.
Men and women typically communicate in different ways, making it very easy for disagreements and misunderstandings to happen.
In addition, women are expected to be more demure. A woman who is aggressive can still be seen as a monster, as someone you don't want to be around or promote. However, a man who is aggressive is seen as powerful, and someone who will go far in his chosen career path.
Common Gender Stereotypes
Stereotypes cause a lot of misconceptions in the workplace. It doesn't matter if we're talking about gender, race, or color. As with any stereotype, gender stereotypes prevent effective communication between men and women. They can even create friction and discord, which lessens company morale and productivity.
Listed below are some common stereotypes about women in the workplace. Again, these are stereotypes. They also highlight the differences between the ways men are viewed in the workplace, as opposed to women.
- Women aren't as experienced in sports as men, so they can't be as good team players.
- Assertive women are trouble – or worse: feminazis.
- Women aren't committed to their work, because of family obligations.
- Women don't work well with other women, because they're catty.
- Women are the primary source of gossip in a workplace.
- Women are too emotional.
So far in this section, we've talked about how women can be negatively portrayed in the workplace, but they are not the only ones. Men can be unfairly portrayed, too. While the stereotypes pinned on the female gender can make a woman seem not as capable, devoted, or qualified, the stereotypes cast on men can make them seem like inhuman perverts, only out for their own success and satisfaction.
Here are a few of the stereotypes that are applied to the male gender in the workplace:
- Men are focused on their careers. Family takes second place.
- Men aren't emotional. In other words, they don't care about anyone's feelings.
- Men can't treat attractive female colleagues as equals, because they only view them as sex objects.
- Men will never see women as their equals in the workplace, because they don't want them to be.
- Men are all part of the "good ole boys" club and always help each other get promotions – over other women colleagues.
The truth is, men and women are in the workplace for the same reason: to advance their career and earn a living. How they choose to do so depends on many factors including education, culture, behavior, and goals – just to name a few. Even though the genders may communicate differently and do things a little differently at times, that doesn't mean that they're not equal and equally committed to the task at hand, their job, and their career. Applying a stereotype to either gender can only result in miscommunication, frustration, and discord in the workplace. Nobody gets ahead when that happens.
It's important to remember that we are all individuals. Even though men and women can seem like two very different creatures, we're all still individuals. Applying a stereotype to anyone is a dangerous thing to do. Not only are stereotypes biased and inaccurate, they can also lead to a legal nightmare if stereotyping someone leads to discrimination. People in the workplace are professionals, and they all should behave as such in their own individual way.
Gender Roles in the Workplace
Both men and women want to get ahead in the workplace. That should go without saying. Whether you are male or female, there's little doubt that part of the reason you are taking this course right now is for the advancement of your career – either now or in the future.
Men and women are also equal in the workplace. That's not just a statement. That's the law. You cannot treat men differently from women – or vice versa. While it may seem like those laws favor women at times, they also make it possible for men to take paternity leave, use sick days to care for children, and do other things that used to be female-only roles.
However, that doesn't mean there aren't gender roles in the workplace that can affect someone's success. Although the roles themselves aren't the focus of this course, understanding the traditional roles and the behavior of a colleague of the opposite gender may help you understand their feelings and values – thereby creating respect.
- Female CEOs who are very vocal are seen as less competent than quieter ones.
- Women are viewed as better team players, since they're also viewed as supportive and rewarding.
- Women are persuasive, because they can read a situation and gather information from all sides.
- Women like a challenge. According to a study by Accenture, 70 percent of business women asked their boss for a challenge at work, compared to less than 50 percent of the business men that were polled.
- Women are honest, hard workers. According to polls by theFit, 54 percent of women worked nine to 11 hours a day. This is compared to 41 percent of men.
- Male CEOs who were quieter were seen as less competent than vocal ones.
- Men are early adopters of technology. An Accenture study found that men adopted technology earlier and relied on it more than women.
- Men ask for what they want. Research by Accenture shows that only 45 percent of women are willing to ask for a raise. Compare that to 61 percent of men.
- Men convey more confidence when they aren't prepared for a task or something else at work.
- Men make friends in high places and get more promotions. In a 2008 Catalyst survey on mentorship, 72 percent of the men received promotions by 2010, but only 65 percent of the women received promotions.
However, remember that the roles are not written in stone. There are women in the workplace who display more masculine behaviors and vice versa. A balance of masculine and feminine qualities has proven to be the strategy for success for individuals, teams, and organizations.
Communication Between the Genders
Effective communication between individuals, teams, or groups depends on a lot of factors. As we've discussed in this course, tone of voice, body language, communication style, and the words used all determine how effective communication is or isn't. Gender also plays a part in communication.
Men and women traditionally communicate in different ways. Each have different strengths and weaknesses when it comes to communication, and use different methods to communicate their thoughts, ideas, and feelings. Understanding these differences can lead to improved communication between the genders in the workplace.
In this section of the article, we detail the communication strengths and weaknesses of both men and women. Keep in mind that not every trait will apply to every woman or every man; these aren't stereotypes, however, but general tendencies observed in research. After that, we'll suggest strategies for communicating with the opposite gender to help improve the effectiveness of communication between men and women in the workplace.
Women are great at reading body language and picking up other non-verbal cues when communicating with other people. They're also good listeners, and effective at showing empathy. However, these same strengths can also be weaknesses when they get too emotional, become too demure and not authoritative enough, or won't get to the point as quickly as needed.
Men exude a strong physical presence when communicating with others. The way they stand or carry themselves displays confidence and power, as does the body language they use. Men also tend to get to the point quickly. However, these strengths can also turn into weaknesses when they get too blunt. Men can be seen as insensitive to others and too confident in their thoughts, ideas, or selves.
Strategies for Effective Communication
To achieve effective communication between the genders in the workplace, we need to find a way to bridge the communication gap that exists. Below are strategies we devised to help make your verbal and non-verbal communication with a person of the opposite gender as effective as possible.
Communication Strategies for Women
Men and women communicate differently. If you're a woman, most likely you can relate to some of the traits that we've described as feminine communication styles. However, being aware of the conversation styles of your male counterparts will give you insight and help improve the effectiveness of communication.
When communicating, men:
- Value achieving results
- See asking for help as admitting a lack of ability
- Focus on statistics
- Tell stories to "one up" the other person
- Want to solve the problem right away
When communicating with men, women need to get to the bottom line as quickly as possible. Avoid telling drawn-out stories when you can. If you do feel the need to tell a story, use gender-neutral metaphors and analogies, such as ones about the weather.
Women have to remember that men aren't going to talk until they have the information they need, so women should wait until a man is ready for discussion. When they do talk, it's time to observe and listen. Don't process what they say out loud.
Since women are more nurturing, it's natural for a woman to offer to help a co-worker or employee. However, to a man, that's viewed as a lack of confidence in his abilities. Women shouldn't be quick to offer advice. Instead, be willing to give the advice or assistance when asked.
When communicating, women:
- Share experiences to find common ground
- Build off each other's points
- Talk about problems and solve them together
- Use processing out loud as a way to build relationships
- Place emphasis on communication and feelings
- Offer assistance to be helpful, and because they care
When a woman tells a story during a discussion, she is trying to find common ground with the other people participating in the discussion. She's not trying to waste time or beat around the bush. Instead, she's trying to forge a relationship with you. When she processes what you say out loud, it's her attempt to include everyone and – again – forge a relationship. A woman also appreciates it if you offer to help, and you should offer. To her, it shows you are supportive.
|
All eyes are on commercial space companies in the wake of the latest setback for Russia’s space programme, which has delayed the launch of the next crew to the International Space Station. A recent flight of a private rocket bodes well for the fledgling industry, but the coming weeks should reveal whether the industry can really take off.
Russia’s space agency Roscosmos reported last week that the Soyuz capsule meant to take astronauts to the station on 30 March sprang a leak when the air pressure inside it was accidentally pumped too high during a test. Another Soyuz capsule is being prepared for launch in its place but will not be ready to fly until 15 May.
It’s just the latest in a string of problems for Russian space vehicles. In August, for example, an uncrewed Soyuz rocket crashed to Earth. That temporarily threw the space station’s future into doubt because the same type of rocket is the only craft used to launch crews to the outpost.
NASA says it remains confident in Roscosmos’s ability to fly astronauts, but says the problems highlight the importance of developing other means of sending crews to the station. “The Soyuz is probably one of the most reliable systems out there, but when you have a spacecraft as significant as the ISS, it makes sense to get more than one capability to get humans [there],” Mike Suffredini, NASA’s space station manager, said in a teleconference last week.
NASA has previously estimated that commercial space taxis could be ready to carry astronauts in 2017. But the date will depend partly on how much money NASA can spend to help private companies develop their vehicles. NASA received $406 million for this purpose in 2012, but had asked for $850 million.
The White House will tip its hand about future spending priorities when it releases its proposed 2013 budget for federal agencies, including NASA, next Monday.
The companies already receiving NASA funding are also set to show their stuff. California-based SpaceX has been working towards launching a space capsule called Dragon on a mission to dock with the station. That launch will likely occur in early April, Suffredini said.
That Dragon capsule will be uncrewed, but SpaceX hopes to win a contract to fly astronauts to the station on later Dragon flights.
Orbital Sciences Corporation, based in Dulles, Virginia, which has a NASA contract to fly cargo to the station on a spacecraft called Cygnus, is farther from launch. It was scheduled to fly a demonstration mission to the station in April or May but will probably be delayed, Suffredini said: “We’re working on a number of options with them” for later flight dates.
However, another private rocket company, Armadillo Aerospace, recently made its highest flight yet, flying its uncrewed STIG-A rocket just shy of the 100-kilometre boundary of space on 28 January.
The company hopes to reach space for the first time by mid-2012, and aims to develop a more powerful launcher to fly people on suborbital trips. The recent flight “tested many of the core technologies needed for the proposed manned reusable suborbital vehicle”, Neil Milburn of Armadillo said in a statement.
|
Bioenergy generation reached a record high in the second quarter of the year, the latest statistics from the Department of Energy and Climate Change (DECC) indicate.
Electricity produced in the UK using landfill gas, sewage gas, biodegradable municipal solid waste, plant biomass, animal biomass, anaerobic digestion and co-firing in the three months to June reached 5.2 terawatt-hours, up 58% compared to the same quarter in 2012 (the report is available here).
DECC said this was mainly due to the temporary return to operation of a large biomass power station in Tilbury, Essex, which had been closed after a fire in February 2012, and the launch of two others at former coal-fired power stations at Ironbridge and Drax.
But the RWE Npower-owned plant in Tilbury, which claimed to be the biggest biomass power station in the world, providing 10% of the UK's renewable power, was turned off again in August 2013 after the government refused to award it a renewable energy subsidy, according to The Guardian.
DECC figures also indicate that the amount of electricity generated with anaerobic digestion was up 35.9% year-on-year, while production with energy from waste decreased 6.3%.
Overall the renewables’ share of electricity generation increased from 9.7% in the second quarter of 2012 to a record high of 15.5% in the same period this year. |
Wednesday, January 12, 2011
Sunday, January 9, 2011
In order to effectively manage waste in a press, let's look at further classifications and then dwell on the processes and areas where we can monitor, control and manage waste!
1. Tare waste (Wrapper waste)
2. Mat Waste (Tear sheet waste)
3. Reel end waste (Core waste)
4. Sweep Waste (Scrap waste)
5. Print Waste (Bad Copies waste)
Let's look at the costliest yet most controllable waste of the above five categories, which is PRINT WASTE.
What are the factors that contribute most towards generating higher print waste?
Press condition – A press whose preventive and routine maintenance is ignored by the press crew will obviously generate higher waste. Do the operators and maintenance staff understand the press well? Have they been trained by the supplier? Normally the focus is on production, and as long as the crew is able to deliver production, machine problems are ignored and deficiencies are patched up. This subsequently results in breakdowns or excessive wear and tear of critical components, directly affecting print quality and generating unsellable copies.
A good printer will understand that any deficiency in print quality may have originated from a dormant press problem that is being overlooked. Check for unhealthy signs in the printing operation before they suddenly surface, when it is too late!
Normally the scheduled maintenance time is considered non-productive, part of the machines' downtime. This is a dangerous perspective. The management and print crew should recognize the need to allocate adequate time for maintenance.
Machine maintenance, in short, is the identification and elimination of each and every factor that limits quality, reduces speed or lengthens make-ready time. These factors are the waste factors, and there are simple strategies that can be adopted while attending to the press.
Posted by Yogesh Kharbanda at 1:59 PM |
For the past ten years, BP and the Next Generation Challenge Committee (NGCC), a team of BP's Upstream graduates, have been inspiring and teaching the younger generation about STEM through volunteering and events such as Careers Awareness Week and Young Scientist Day.
This year’s flagship event, the Young Scientist Day, saw 14 teams of 13-14 year olds from local schools compete in a unique challenge. The youngsters were tasked with a technical challenge to design and build oil platforms using core business insight in an attempt to win the coveted prize of Young Scientist Day champion.
The aim of the day is to teach the teenagers about BP and the energy industry, whilst also helping them to develop their creativity, problem solving skills and basic business knowledge. The graduates and interns that supported the event also develop their communication skills through mentoring and they have the opportunity to have fun inspiring school pupils with the wealth of knowledge that a career in STEM subjects can bring.
Each team was assigned a BP graduate or intern as a mentor during the event. The teams had not seen the challenge brief before arriving at BP's International Centre for Business and Technology (ICBT) in Sunbury, to ensure that no team was given a head start. The objective of the challenge was for each team to generate as much profit as possible by constructing and selling miniature offshore platforms, while minimising the material costs required for construction by optimising their design.
Teams were briefed by their mentors on the Capital Value Process, BP's project management process used to execute a project. The teams then had to develop a platform design in line with set criteria and build up to three platforms using the range of high performance materials available for them to purchase: straws, plastic cups, napkins, card and wooden stirrers.
Each constructed platform was tested by members of the NGCC: Basel Razouk, subsea engineer graduate; David Smith, applied geophysics graduate; and Emilie Lunddahl, process engineer graduate. They tested against the original design criteria and awarded a value based on performance, with rigorous tests of platform size, strength and stability (a code sketch of these scoring rules follows the list). Each team received:
- $1M for every cm2 of platform surface area
- $100M for being able to hold a table tennis ball on top without it rolling off
- $100M for holding the weight of one 500ml water bottle, $200M for two bottles, $350M for three, and $500M for four
- $100M for a height of 12cm, $200M for 16cm, and $400M for 20cm
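A minimal sketch of these scoring rules in code, assuming the height and bottle awards are thresholds rather than exact targets (the source does not say explicitly); the example measurements are invented:

```python
# Young Scientist Day platform scoring; all values in $M.
def platform_value(area_cm2: float, holds_ball: bool,
                   bottles_held: int, height_cm: float) -> float:
    value = area_cm2 * 1.0                         # $1M per cm2 of surface area
    if holds_ball:
        value += 100                               # ball stays on top
    bottle_awards = {0: 0, 1: 100, 2: 200, 3: 350}
    value += bottle_awards.get(bottles_held, 500)  # four or more bottles: $500M
    if height_cm >= 20:
        value += 400
    elif height_cm >= 16:
        value += 200
    elif height_cm >= 12:
        value += 100
    return value

# A 15cm x 15cm platform, 18cm tall, holding the ball and two bottles:
print(platform_value(area_cm2=225, holds_ball=True, bottles_held=2, height_cm=18))
# -> 725.0; each team's profit would be this value minus material costs
```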
The event was a huge success with all teams thoroughly enjoying the challenge and creating a variety of innovative platform designs. Some teams opted to create two low performance, cost effective platforms whilst other teams chose to construct a single, high performance platform.
Congratulations to Thamesmead School in Shepperton, who were awarded the title of 2017 Young Scientist Day Champions by achieving the near-impossible goal of low cost and high performance through a combination of inventive designs and a shrewd procurement strategy.
Congratulations also go to the runner-up, Reading School, and to Matthew Arnold School, which was awarded a prize for the lowest-cost design.
Well done to everyone that entered the competition. BP look forward to welcoming the next set of budding engineers in Summer 2018. |
Written by Ana Canteli on 4 May 2018
It is not easy to define what digital transformation is. In fact, many organizations believe they are implementing it when what they are really doing is carrying out optimization projects since they are actually improving the business models that already exist.
It is true that digital business transformation identifies outdated business processes and replaces the legacy technology of companies with new business and technology. However, this is only an element that is part of the concept, which does not define it in its total scope. In fact, this is something that also happens with digitization. Digitization is in many cases, a necessary step for digital initiatives to take place; but they are not synonyms.
According to George Westerman, MIT Sloan researcher in the digital economy, digital transformation - or DX economy - occurs when companies use technology innovations to radically change their performance in the same sector.
In its most generic meaning, digital business transformation means using technology to improve – and yes, also to radically change – business processes and enhance the customer experience, focusing on the points where customer expectations and the business meet, in order to detect and develop new possibilities while using technology intensively to achieve those objectives.
Digital strategies in digital business transformation involve the integration of digital technologies in all areas of the business; so that it substantially changes the way in which people operate in the company while providing value to the customer.
Digital strategies in digital business transformation also cover the change produced by the use of digital innovations in all aspects of business and society. The digital transformation framework is a human issue, as it requires a cultural change that constantly challenges the status quo of the organization, which has to experiment, get used to the changes and also learn from its failures in this area. Digital transformation is a project with a multitude of intermediate, interconnected objectives, grounded in continuous optimization across the processes, departments and ecosystems of the company in the era of hyperconnectivity, where building the bridges needed to induce, create or maintain digital transformation is the key to success.
To begin with, digital transformation implies a radical change in how technology is used, compared with how the company has used it until now. It can provide renewed sources of income and change the business model.
However, achieving this requires a deep interdepartmental collaboration to apply the business-centred philosophy and the rapid assimilation of the new business models, simultaneously.
Digital transformation really means the radical change of the business and activities of the organization; business processes, competencies and business models that allow the full development of changes and opportunities in the field of digital technology, in conjunction with the accelerated effect that this is causing in society. In turn, this must be managed in an organized manner, taking into account current and future changes.
For this, it is essential to have digital transformation strategies that allow the necessary changes to be applied while taking into account the causes that lead the company to implement this transformation. Digital transformation can be triggered by various causes, which can even occur simultaneously at different levels: consumer expectations and behavior, economic reality, generational change and the emergence of disruptive technology (for example, the accelerated adoption of innovations that change the sector considerably).
Technology is only part of the equation of digital transformation. Technological evolution and the technologies that range from cloud computing, big data, data analytics, artificial intelligence and mobile applications to the internet of things (IoT) and blockchain are catalysts of digital transformation: either a cause of the need – among others, since they condition consumer behavior or completely redefine the sector – or an accelerator of that innovation or transformation.
After all, the objectives of the organization, clients, and stakeholders decide the agenda. The main role of any organization is to unite the points and overcome the internal difficulties in all areas to complete their objectives; thus the elimination of silos becomes the norm. This change happens at a technological, cultural, business and organizational level.
The holistic vision of digital transformation will allow companies to acquire the core skills they need to succeed in the changing environment in which they operate, where such changes occur faster and faster. This dynamic pivots between technological acceleration and disruptive technology that challenges the status quo of the traditional business model, demanding rapid reaction to changes in customer behavior and stakeholder demands throughout the supply chain.
It is necessary to have the leadership to apply this holistic approach. This quality will make the implementation of digital transformation strategies possible regardless of the kind of organization involved; do not think this can only happen in technology companies or companies with "new" organizational models. However the holistic approach is drawn, overcoming silos and the gaps between perception and reality will prevail. In practice, digital transformation strategies are applied in bottom-up pilot projects, tailored initiatives or specific departments.
This process does not happen overnight. There are many components and a multitude of intermediate objectives. The digital transformation takes place in incremental stages and according to the degree of digital maturity of the company.
Digital transformation is an issue that is on the table of many organizations. To obtain the benefits, it is essential to focus on the challenges faced by a real business (and its consumers) and to have a clear and organized plan to prioritize and involve all actors in the digital transformation process.
Digital transformation is vital to customer loyalty, as it resolves the difference between what customers expect from their digital experience and what companies offer them in reality.
Well-applied integration of digital technology can not only provide satisfaction to consumers but also enhance the user experience of the internal customers of digital transformation, since people in general have long adopted digital practices in other facets of their lives, from shopping online on mobile devices to managing home automation.
In other words: digital businesses are experiencing a decentralization of focus, from the final customer of their products or services to the simultaneous coverage of the organization's whole ecosystem. The client in a broad sense is the fundamental part of the equation, in which the end customer's experience, employee satisfaction and the value perceived by stakeholders and associates are all components of this approach.
Retail is one of the vertical sectors that is changing the most, and it is considered a precursor of the technological advances that help maintain a correlation with the evolution of 24/7 consumer needs.
Digital transformation is omnipresent in all aspects of this sector, from data management to information technology and optimization, through digitization of the supply chain and the delivery process, to the relationship between administration and customer service, where consumer expectations are weighed against transformation needs in an environment that is neutral as to the origin of technological initiatives.
With the help of cloud services and big data, retail businesses enjoy a wide range of technologies that are completely changing the image of the sector. Here, analysts see a prominent role for the Internet of Things, particularly in relation to digital signage services and cross-platform scenarios. Obviously, these cases are closely associated with the new technologies mentioned.
Digital transformation provides the possibility of matching the business to the constant change in technology that occurs in the industry, which will allow companies in the sector to focus their efforts on what really matters to these organizations: the empowerment of employees who will count with the necessary tools to quickly meet the needs and expectations of the client and thus transform the products and services they offer.
Under the umbrella of Industry 4.0, or the industrial internet, digital transformation in the sector is advancing at different speeds through the integration or convergence of information technology and operational technology, as a fundamental element for improving efficiency and speed. Information technology and operational technology are used to control events, processes and resources so that the necessary adjustments can be made in the organization.
The slow technological evolution of the manufacturing sector is offset by the speed of adoption in this sector of the Internet of Things. The implementation of cyber-physical systems and innovative technological systems and services allows companies in the sector to identify and define the obstacles that arise. To be successful in Industry 4.0, it is necessary to combine a vision of the changes that digital transformation implies with the challenges and technological evolutions that will have to be faced, in conjunction with the effects that this has on the human resources of the company.
The application of digital transformation differs from country to country, and also depends on whether we are talking about a public institution at the national or local level and on the type of organization to which we refer.
From the point of view of the citizen, the role of digital transformation is very clear in areas such as e-government or personal identification programs. For public administrations, the implementation of digital transformation is carried out as a means of reducing costs; in an increasingly ageing society and in which coordination of institutions at the local, regional and national level and transparency are objectives that must be met using the available resources as efficiently as possible.
Another factor to consider is the search for citizen satisfaction in the digital age, improving the digital experience of the taxpayer. In the information society, it is increasingly essential to meet the expectations of a population more familiar with the use and consumption of mobile services, which are less willing to participate in paper-based processes. Overcoming and eliminating these frustrations is critical in today's society.
Digital transformation in the field of medical care is fostered by a growing and ageing population that presents its own challenges, such as the rise of chronic diseases, the increase in costs and changes in people's expectations and behavior.
A tangible example is the change in the behavior of workers in the sector, who use technological devices every day, such as computers, tablets or mobiles for the development of their functions and in the provision of services, making health care more focused on information.
There are few sectors like logistics in which the interconnectivity of organizations, ecosystems, processes, information flows, products and distribution is so present and plays such an important role in the business.
In the current context of globalization with the constant change in customer expectations, the growing pressure on profit margins, the risk on large volumes of information, the logistics and transport sector is continually fluctuating. In this scenario, coordination in the transport chain, speed, visibility, digitization and digital transformation are among the priorities of companies in the sector.
Historically, digital transformation has been seen as the digitization of information; that is, transferring it from paper format to digital format. This step is obviously necessary since the digital transformation process needs various elements to be successful. Here we will also highlight some of them.
The application of digital transformation in the company is also an opportunity to make necessary changes that affect various groups, divisions, processes and already-implemented technologies at the same time. Moreover, it is an opportunity to analyze what can be improved and what can be reframed to make it better.
Given the role of data and analytics in digital transformation, there are even more opportunities for change management.
For example: when web analytics applications became a trend, it soon became clear that changes were needed in theoretically unrelated areas such as customer service and marketing, which also revealed the existence of silos in many aspects of operations focused on customer service.
However, in any case, the importance of change management rests on human capital: the internal customer (user/employee), partners, stakeholders and the general ecosystem in which the company moves.
The debate about who should take charge of the digital transformation project in the organization is older than the coining of the term itself. That may be why positions such as director of information systems, director of digital transformation and analogous figures have been created; they most likely have a role to play in a digital transformation whose goals are global in reach and transversal in application.
Managers in the digital age have to be informed of what others do (competition, other departments, associates, interest groups of the company) and what are the experiences, methods, and skills that others have or are developing.
All digital businesses are different, and there is no single solution that works for everyone. However, there are some valid premises.
In the digital age, it is important to recognize that we are in a state of constant revolution. Today, we can no longer make a technological change and expect it to work for the next five years.
To begin, we must bear in mind that the role of management in digital businesses is not the same as it can play in a company with a traditional structure. One of the challenges facing current managers is that they are not fully prepared to face the challenge of digital transformation and maintain and motivate staff throughout the transformation process.
In order to make digital transformation a reality, it is necessary to enjoy a complete alignment of the organization to that end. Without the coverage that the directive must offer and without the support of associates or interest groups that participate in the process, it is very challenging to succeed in the project.
However, one thing that can be made clear is that the best companies - those that manage to implement the process of digital transformation - are those that enjoy strong leadership and that turn technology into a transforming force.
To understand what digital transformation means, it is essential to prioritize people and processes over technology, because digital transformation can be seen as a Trojan horse that disrupts the organization as it is established and understood by its members. Digital transformation should instead be seen as a renewing force, an opportunity to rethink and improve top-down processes, and with them the talent, the organizational structure, the business model, the products, the services, and so on. Some of these changes will be easier than others.
In this context, the power of words is very important; something that is often underestimated or not taken into account. The language used internally in the company – about products or opportunities – and how it travels outside (to customers or consumers) can have a powerful impact on the management of the business and the results it provides. For example, staff often use internal acronyms to talk about well-known products, and this vocabulary frequently escapes the organization and is used with clients, who may feel annoyed or talked down to when the company addresses them in cryptic language. The same phenomenon can occur internally, creating communication problems between departments.
The OpenKM document management system can play a crucial role in the company's digital transformation projects.
The functionalities of OpenKM can be used to optimize processes, from the automation facilities to the workflow engine included in the document management software, through the OCR engine that streamlines scanning processes where necessary. Indeed, the OpenKM program suite offers a scanner client that allows the business content system to be integrated with the scanner used in the company.
The communication tools included in the program contribute to improving coordination between departments and areas, making communication between staff and public more fluid. OpenKM can be used to manage the company's social media.
The complete search engine inserted in OpenKM allows quality information about featured insights or relevant enterprise content to be easily obtained.
On the other hand, the OpenKM suite offers the resources the organization needs to complete or expand the platform's services: digital signature clients, MS add-ins for staff, desktop sync and email archiving, as well as facilities designed for technical staff, such as a complete API, SDKs for Java, PHP and .NET – which allow free integration of the document manager into the company's technological platform – and the import station. These are just some examples of the facilities offered by the OpenKM document manager to meet the present and future needs of the organization, turning the acquisition of the document manager into a strategic decision that is amortized over the long term.
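As an illustration of how such an API might be driven, here is a minimal Python sketch of uploading a scanned document over HTTP. The endpoint path, parameter names and credentials below are assumptions for illustration only – consult the OpenKM REST/SDK documentation for the exact calls in your version (the official SDKs target Java, PHP and .NET):

```python
# Hypothetical sketch: pushing a scanned invoice into an OpenKM repository
# over its REST interface. Endpoint and field names are illustrative
# assumptions, not confirmed OpenKM API; adapt them to the real docs.
import requests

OPENKM_URL = "http://localhost:8080/OpenKM"   # assumed server address
AUTH = ("okmAdmin", "secret")                 # replace with real credentials

def upload_document(local_file: str, repo_path: str) -> None:
    """Upload a local file to a repository path, e.g. /okm:root/invoices/x.pdf."""
    with open(local_file, "rb") as fh:
        resp = requests.post(
            f"{OPENKM_URL}/services/rest/document/createSimple",  # assumed endpoint
            auth=AUTH,
            data={"docPath": repo_path},
            files={"content": fh},
        )
    resp.raise_for_status()  # fail loudly if the server rejects the upload

upload_document("invoice-0425.pdf", "/okm:root/invoices/2018/invoice-0425.pdf")
```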
|
Job satisfaction may be measured for a variety of reasons. For example, a company may measure job satisfaction over time to assess trends in employee attitudes or reactions to a new policy or organizational intervention. Assessing job satisfaction might also serve a diagnostic purpose, identifying those aspects of the job with which employees are dissatisfied. As a last example, companies might measure job satisfaction to predict other important attitudes or behaviors (e.g., job turnover). In all instances, a useful measure is important.
What makes a measure of job satisfaction useful?
Is It a Good Measure?
Good measures are reliable (i.e., levels of job satisfaction that are in fact consistent over time demonstrate similar satisfaction scores), valid (i.e., the measure provides a pure measure of job satisfaction), discriminating (i.e., the measure of job satisfaction is equally sensitive to low and high reported levels), and comparable (i.e., the measure allows you to compare job satisfaction scores across groups). Developing a good measure requires significant expertise and resources and should be undertaken by individuals with strong backgrounds in psychometrics and statistics. The unfortunately common strategy of writing a few items and assuming they provide a measure of job satisfaction is inappropriate. Without evidence of quality, homegrown measures may yield erroneous interpretations and conclusions.
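To make the reliability idea concrete, here is a minimal sketch of one common internal-consistency check, Cronbach's alpha, computed on made-up item responses; the formula is standard psychometrics added for illustration, not part of the original entry:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of total)
import numpy as np

# rows = respondents, columns = satisfaction items (made-up 1-5 ratings)
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])

def cronbach_alpha(x: np.ndarray) -> float:
    k = x.shape[1]                          # number of items
    item_vars = x.var(axis=0, ddof=1)       # sample variance of each item
    total_var = x.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"alpha = {cronbach_alpha(responses):.2f}")  # values near 1 suggest consistency
```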
Is the Measure Appropriate for Your Purposes?
Multiple good measures of job satisfaction are available, so the choice depends in part on purpose. For example, is the measure of job satisfaction easy to administer, score, and interpret? Does it support the types of interpretations needed (e.g., overall job satisfaction versus different areas or facets of job satisfaction)? Is the reading level appropriate? Is the measure available in different languages so that organizations can assess satisfaction in the first languages of employees throughout the world? Finally, how much does it cost? Answers to these questions will be very helpful in selecting the best possible measure of job satisfaction for the purpose at hand.
Variations in Measures of Job Satisfaction
Quantitative Versus Qualitative Measures
Quantitative measures of job satisfaction, based on numerical ratings assigned to closed-ended response items, are by far the most commonly used types of measures (and are preferred, given the characteristics of a good measure identified above). Structured interviews, content coding of open-ended response items, and other qualitative measures of job satisfaction offer an enriched interpretation of findings obtained from quantitative measures. They are not recommended in place of quantitative measures, because they do not lend themselves to drawing comparisons across groups of employees or organizations.
Overall versus Facet
Given the different purposes for measuring job satisfaction, both overall and facet measures have been developed. Overall measures provide a global assessment of job satisfaction and may require the summation of several general items, the summation of items measuring a broad set of facet areas of satisfaction, or both. Facet measures focus on the assessment of satisfaction with different aspects of the job, which typically include dimensions such as supervision, pay, coworkers, and the work itself. Unlike an overall rating, facet measures yield a diagnostic profile of satisfaction so that one may identify particular areas that might be high or low.
Single versus Multiple Item Measures
It is appealing to think that a well-written single item will be a good measure of overall job satisfaction (e.g., "Overall, I am satisfied with my job") or of different facets of job satisfaction (e.g., "My level of pay fails to meet my needs and expectations"). Such items would be short and easy to complete, score, and interpret. Unfortunately, they typically have low reliability and validity. Reviews of published measures of job satisfaction (see the References section) commonly include multiple-item measures.
General versus Occupation-Specific Measures
Most measures of job satisfaction are developed for use across occupations. These general measures are useful for most organizations. However, measures of satisfaction have been developed for specific employee populations (e.g., nurses, human service employees). Although such measures may be more sensitive to the particular issues of a profession or job grouping, they are not available for many occupations and prohibit cross-occupational comparisons.
Locating Measures of Job Satisfaction
Mental Measurement Yearbook
The Mental Measurement Yearbook (MMY) is a serial publication, available in most libraries, that provides a listing of a broad range of tests and measures. The MMY solicits external reviews by established researchers who critically evaluate new measures. Because of its broad scope, however, it does not provide an all-inclusive listing of established measures of job satisfaction.
Compendia of Satisfaction Measures
There are compendia of job attitude measures, a number of which are included in the References section at the end of this entry. Although some are dated and may not include recently developed measures of job satisfaction, compendia often provide summaries and recommendations that can help one choose among the many published measures.
Test Publishers
A large number of test publishers market measures of job satisfaction that were developed in-house by the publishers’ professional staffs, or provide marketing support for measures of job satisfaction developed by others. Unfortunately, there is no easy way to identify test publishers who specialize in measures of job satisfaction.
World Wide Web
Currently, Internet search engines can be used to locate Web pages that provide information about measures of job satisfaction. Also, the electronic database PsycINFO indexes more than 1,900 behavioral science journals. Unfortunately, current features of the search interface make it challenging to distinguish articles about measures of job satisfaction from articles that simply measure the construct.
Exemplar Measures of Job Satisfaction
Although a large number of measures of job satisfaction are available, and some may be more relevant for a specific purpose, a few measures are discussed here based on their excellent reputations as well-designed and useful instruments.
Faces Scale
The Faces Scale, developed in the 1950s, measures overall satisfaction using a single, nonverbal item. Eleven faces appear along a continuum from a broad smile to a deep scowl, and respondents are asked to circle the face that best describes their overall job satisfaction. Despite the admonishments earlier in this discussion against using single-item measures, the Faces Scale has been shown to be a remarkably good measure of satisfaction with the job overall. It is simple to administer and score, and it can be administered across a broad range of employees, although it may be less accepted by midlevel management or above, and it is unclear whether it is effective in cross-cultural situations. Overall, the Faces Scale is a quick and simple measure of overall job satisfaction.
Minnesota Satisfaction Questionnaire
The 20-item short form of the Minnesota Satisfaction Questionnaire (MSQ) was developed in the 1960s to provide a comprehensive assessment of general job satisfaction. Each of the 20 items starts with a common stem (“On my present job, this is how I feel about:”) and taps a specific aspect of the job (e.g., “…being able to keep busy all the time”; “…the working conditions”). Each item is scored on a five-point scale from very dissatisfied to very satisfied, and the items are summed in an unweighted fashion for an overall measure of satisfaction. Item subsets can also be summed to provide scores on intrinsic and extrinsic satisfaction, but recent research questions the quality of these two submeasures. Decades of accumulated research suggest that the MSQ provides a good measure of overall satisfaction.
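A minimal sketch of that unweighted scoring, with invented responses:

```python
# Twenty items, each rated on the 1-5 scale; the overall score is the plain sum
# (possible range 20-100). These responses are invented for illustration only.
responses = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5, 4, 4, 3, 5, 4, 2, 4, 5, 3, 4]
assert len(responses) == 20 and all(1 <= r <= 5 for r in responses)

overall_satisfaction = sum(responses)  # unweighted sum
print(overall_satisfaction)            # 77 for these invented responses
```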
Job Diagnostic Survey
The Job Diagnostic Survey (JDS) measures job characteristics but also includes a five-item measure of overall job satisfaction. The items include positively worded statements (e.g., “Generally speaking, I am very satisfied with this job”) as well as reverse-scored items (e.g., “I frequently think about quitting this job”). The items are scored on a seven-point scale from disagree strongly to agree strongly and are summed in an unweighted fashion for an overall measure of satisfaction. The JDS job satisfaction scale is easy to administer and score and has been found to provide a good assessment of overall job satisfaction. However, two items focus on quitting, a related but different concept, so it may not be a pure measure of job satisfaction.
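Reverse-scored items flip the response scale before summation: on a seven-point format, a response r becomes 8 − r. A minimal sketch, with item numbering that is illustrative rather than the actual JDS layout:

```python
# Items assumed (for illustration) to be negatively worded, e.g. the quitting items.
REVERSE_ITEMS = {3, 5}

def jds_overall(responses: dict[int, int]) -> int:
    """Unweighted sum of seven-point items, reverse-scoring negatively worded ones."""
    return sum((8 - r) if item in REVERSE_ITEMS else r for item, r in responses.items())

print(jds_overall({1: 6, 2: 5, 3: 2, 4: 6, 5: 3}))  # 6 + 5 + 6 + 6 + 5 = 28
```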
Facet-Specific Job Satisfaction
The Facet-Specific Job Satisfaction (F-SJS) measure includes 33 items measuring six distinct features of the job: comfort (e.g., “The hours are good”), challenge (e.g., “The work is interesting”), financial rewards (e.g., “The pay is good”), relations with coworkers (e.g., “The people I work with are friendly”), resource adequacy (e.g., “My responsibilities are clearly defined”), and promotions (e.g., “Promotions are handled fairly”). Responses are scored on a four-point scale from very true to not at all true, providing six distinct scale scores; the items can also be summed to provide an overall measure of job satisfaction.
Job Satisfaction Survey
Originally developed for use in human service organizations, the Job Satisfaction Survey (JSS) includes 36 items that are scored on a six-point scale from disagree strongly to agree strongly. Scored items are summed in an unweighted fashion for an overall measure of satisfaction. There are also nine facet scores: pay (e.g., “I feel I am being paid a fair amount for the work I do”), promotion (e.g., “I am satisfied with my chances for promotion”), supervision (e.g., “My supervisor is unfair to me”), fringe benefits (e.g., “I am not satisfied with the benefits I receive”), contingent rewards (e.g., “When I do a good job, I receive the recognition for it that I should receive”), operating procedures (e.g., “I have too much paperwork”), coworkers (e.g., “I enjoy my coworkers”), nature of work (e.g., “I feel a sense of pride in doing my job”), and communication (e.g., “Communications seem good within this organization”). Score distributions from previously surveyed employees (primarily from public-sector and medical/mental health organizations) are available online for comparison purposes.
Job Descriptive Index/Job in General
The Job Descriptive Index (JDI), first published in 1969 and revised in 1985 and 1992, is commonly cited as the most carefully developed and most frequently used measure of job satisfaction. It has been translated into a variety of languages, and national norms have been developed (and are regularly updated) to allow both within- and cross-organization comparisons. The JDI measures five facet areas of satisfaction that have been identified as important across many organizations: work itself, pay, opportunities for promotion, supervision, and the people with whom one works. The scale includes a total of 72 adjectives or short phrases, and respondents are asked to mark a “Y” (Yes, it describes my job), an “N” (No, it does not describe my job), or “?” (Cannot decide). The Job in General (JIG) measure was developed in 1989 to provide a complementary measure of overall job satisfaction to the JDI. The JIG includes 18 items, using the same item design and response format as the JDI. The JDI and JIG can be completed by individuals with a third-grade or higher reading level and together take no more than 15 minutes to complete. More recently, abridged versions of the JDI and JIG have been developed in response to the desire for shorter measures that still include a broader range of scales and items. The Abridged Job Descriptive Index (AJDI) contains a total of 25 items; the Abridged Job in General (AJIG) measure contains 10 items. Efforts are under way to offer online administration, scoring, interpretation, and report writing that are completely automated, a service that may be particularly helpful for midsized organizations that lack the expertise to do their own survey work.
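The JDI’s Y/N/? responses are scored with weights rather than a Likert sum. The 3/0/1 weights below, mirrored for negatively keyed adjectives, are the convention commonly reported in the literature; the users’ manual (Balzer et al., 1997) remains the authoritative source. A sketch:

```python
POSITIVE_WEIGHTS = {"Y": 3, "N": 0, "?": 1}
NEGATIVE_WEIGHTS = {"Y": 0, "N": 3, "?": 1}  # reversed for negatively keyed adjectives

def score_facet(marks: dict[str, str], negatively_keyed: set[str]) -> int:
    """Sum weighted Y/N/? marks for one JDI facet (e.g., the work itself)."""
    return sum((NEGATIVE_WEIGHTS if adj in negatively_keyed else POSITIVE_WEIGHTS)[mark]
               for adj, mark in marks.items())

marks = {"fascinating": "Y", "routine": "N", "satisfying": "?"}
print(score_facet(marks, negatively_keyed={"routine"}))  # 3 + 3 + 1 = 7
```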
References
- Balzer, W. K., Kihm, J. A., Smith, P. C., Irwin, J. L., Bachiochi, P. D., Robie, C., et al. (1997). Users’ manual for the Job Descriptive Index (JDI; 1997 Revision) and the Job in General (JIG) scales. Bowling Green, OH: Bowling Green State University.
- Feild, H. S., Childress, G. B., & Bedeian, A. G. (1996). Locating measures used in I/O psychology: A resource guide. The Industrial-Organizational Psychologist, 34, 103-107.
- Fields, D. L. (2002). Taking the measure of work: A guide to validated scales for organizational research and diagnosis. Thousand Oaks, CA: Sage.
- Kunin, T. (1955). The construction of a new type of measure (Faces Scale). Personnel Psychology, 8, 65-78.
- Russell, S. S., Spitzmuller, C., Lin, L. F., Stanton, J. M., Smith, P. C., & Ironson, G. H. (2004). Shorter can also be better: The abridged Job in General scale. Educational and Psychological Measurement, 64, 878-893.
- Spector, P. E. (1985). Measurement of human service staff satisfaction: Development of the Job Satisfaction Survey. American Journal of Community Psychology, 13, 693-712.
- Stanton, J. M., Sinar, E. F., Balzer, W. K., Julian, A. L., Thoresen, P., Aziz, S., et al. (2002). Development of a compact measure of job satisfaction: The abridged Job Descriptive Index. Educational and Psychological Measurement, 62, 173-191. |
Bhubaneshwar: As a step towards achieving environmental excellence, Tata Steel’s Sukinda Chromite Mine in the state of Odisha has set up a state-of-the-art Effluent Treatment Plant (ETP) with a capacity of 108 million litres per day that treats both surface run-off and mine water to almost drinking-water specifications. This is the biggest ETP in the region and the largest single-location ETP in India.
Tata Steel operates one of India’s largest chromite mines in the Sukinda Valley in Odisha, producing chrome ore that it subsequently converts to ferrochrome and sells to customers across the world. A large quantity of water, generated during mining and from rainfall, needs to be handled during mining operations. The Sukinda Valley receives about 110 cm to 180 cm of rainfall annually, of which 80% falls during the monsoon season between June and September. Water coming into contact with chromium ore preferentially leaches soluble hexavalent chromium from the ore body. As a result, water from the mine contains 0.2-4 mg/l of hexavalent chromium against a safe limit of 0.05 mg/l for human consumption, requiring all water to be treated before its release from the mines.
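A back-of-the-envelope check of the figures above (only the plant capacity, the worst-case concentration, and the drinking-water limit are taken from this article) shows the scale of the treatment duty:

```python
# All figures are from the article; the calculation itself is illustrative.
capacity_l_per_day = 108e6   # ETP capacity: 108 million litres per day
inflow_mg_per_l = 4.0        # worst-case hexavalent chromium in mine water
limit_mg_per_l = 0.05        # safe limit for human consumption

removal_required = 1 - limit_mg_per_l / inflow_mg_per_l
daily_load_kg = capacity_l_per_day * inflow_mg_per_l / 1e6  # mg -> kg

print(f"required removal efficiency: {removal_required:.2%}")  # 98.75%
print(f"worst-case Cr(VI) load: {daily_load_kg:.0f} kg/day")   # 432 kg/day
```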
The ETP at the Sukinda Chromite Mine has 24/7 real-time monitoring of hexavalent chromium, pH, and Total Suspended Solids through online monitors installed at both the input (raw effluent) and the output (treated water). The treated water is recycled as an input to the Water Treatment Plant for drinking water. This not only conserves water but is also a step towards becoming water neutral.
North Korea’s state-run Korean Central News Agency (KCNA) reported Thursday that North Korean researchers have developed an immunity-boosting drug that can be used for the prevention and treatment of malignant infectious diseases such as Middle East Respiratory Syndrome (MERS). The drug, called “JTG-2”, is an injection whose ingredients include ginseng extract and rare earth elements, and was developed by researchers at a state pharmaceutical company. South Korea’s Yonhap news agency noted that the outside world is unable to confirm North Korea’s claim and that, given the weakness of North Korea’s health care system, its reliability is questionable.
An “upgrade” of a 1996 drug
According to the report, the “JTG-2” injection was developed by the pharmaceutical company’s researchers. In photos published by KCNA, the drug’s packaging is gold with red markings; each carton holds eight glass vials.
According to North Korea’s national web portal, researchers developed a similar drug as early as 1996, and the new drug is an “upgrade” of it. The director of the pharmaceutical company, a doctor, said: “As a powerful immune-stimulating regulator, the injection has been shown to prevent different kinds of malignant epidemics.”
With the MERS outbreak in South Korea, the North Korean government has attached great importance to epidemic prevention and control. Earlier this month, at the North’s request, the South Korean side of the Kaesong Industrial Park installed three temperature-screening devices to check those entering for fever symptoms and prevent the virus from spreading into North Korea.
On the rare earth ingredient, KCNA quoted the company director as saying: “Researchers inject trace rare earth elements into ginseng; the rare earths combine with the sugar components of the ginseng, turning the ginseng into an ideal complex. The injection is extracted from this complex.” He added: “SARS, Ebola, MERS and other infectious diseases are associated with low immunity, so the JTG-2 injection, as a potent immune activator, can cure these diseases easily.”
However, North Korean media did not say which rare earth elements the drug contains, nor whether it is being exported. Foreign media have reported, though, that “JTG-2” can already be bought from a North Korean-run dealership in Moscow.
Because of its weak medical system, North Korea has over the past half-year taken various measures to reduce the number of foreign tourists, for example to prevent them from carrying the Ebola virus into the country. The Pyongyang marathon held in the first half of this year was nearly cancelled for the same reason.
Experts note that although no Ebola outbreak has occurred near North Korea, the country remains committed to developing new drugs, in large part to fight tuberculosis, respiratory infections, and the other diseases that mainly affect North Koreans’ health. During the global bird flu outbreaks of 2006 and 2013, North Korea launched similar drugs.
Korean- and English-language reports differ in style
The news reports exist in both Korean and English versions. The English article, used for external propaganda, merely mentions that the injection developed by the company is a “powerful immune reviver”; its language is relatively restrained and conservative. The Korean version, written for a domestic audience, differs in style: it first introduces the MERS outbreak in South Korea and around the world, together with the Ebola virus, describing them as “malignant viruses” likely to affect “immunocompromised populations”, and then says the “JTG-2” injection can effectively enhance human immunity and is therefore effective in treating both viruses.
The article also claims the injection prevents and treats epidemic infectious diseases such as ordinary influenza, SARS, bird flu, and HIV/AIDS. It says North Koreans who received the “JTG-2” injection have travelled to disease-affected areas and returned unharmed, offering this as proof of the injection’s protective effect.
KCNA said that malignant influenza has spread in many American states in recent years, killing hundreds, and that “JTG-2” is therapeutic for these diseases as well. It suggested that countries take preventive measures by giving their people the injection, and that patients showing the early cold-like symptoms of malignant infectious diseases such as MERS and Ebola receive the “powerful immune reviver” before other treatments. “This is very necessary; international experts have confirmed it should be done,” KCNA reported. (Xinhua)
South Korea reports no new MERS cases or deaths for the first time
SEOUL, June 20 (Xinhua) – South Korea’s Ministry of Health and Welfare said on June 20 that the country recorded no new confirmed MERS infections and no additional deaths that day. This was the first time since the first case was confirmed on May 20 that South Korea saw neither new cases nor new deaths.
The ministry said that as of 6 a.m. on June 20, confirmed cases remained at 166, the same as the previous day: 77 hospital patients, 59 family members and visitors, and 30 hospital workers. Deaths were unchanged at 24; 22 of the dead were elderly patients or patients with chronic conditions such as cancer, heart disease, or lung disease, the groups at highest risk from MERS. Six more patients recovered fully and were discharged from hospital, bringing total recoveries to 36. Of the 106 patients still under treatment, 15 were in unstable condition.
According to ministry figures, more than 12,000 people in South Korea have been quarantined as possible contacts since the first confirmed case. About 7,400 of them showed no symptoms and have been released from isolation. As of June 20, 5,197 people remained in isolation and 174 suspected cases were being tested. As of 3 p.m. on June 19, 108 schools were still closed because of the outbreak.
On June 19, South Korea reported only one new case, the lowest daily count since June 3. After two consecutive days of low numbers, some believe the outbreak may be nearing its end. An official at the ministry’s emergency control center said the modest increase in patients “lets us believe that the spread of the disease is now falling”.
However, because the MERS virus has an incubation period of 14 days, some ministry officials warned that the number of patients may begin to rise again, since thousands of people remain in isolation. South Korean President Park Geun-hye said on June 19 that the country will continue to strengthen its measures against MERS until the deadly virus has been completely eradicated. She also promised that South Korea will establish a complete system to cope better with emerging infectious diseases.
What You Need to Know About Thin-Film Coating
What are Physical and Chemical Vapor Deposition (PVD and CVD)?
PVD thin-film coating is a process in which a solid material, often a metal, is vaporized in a vacuum and deposited, atom-by-atom, onto the surface of a part. This material may be combined with nitrogen, oxygen, or a carbon-containing gas to form compound materials. This process forms a thin, bonded, metal or metal-ceramic layer on your part or product’s surface that greatly improves its appearance, durability, and/or function. The deposition process can be easily customized to change the color, durability, or other characteristics of a coating.
PE-CVD (plasma enhanced chemical vapor deposition) thin-film coating is a similar process in which the atoms in a gas are energized and deposited on a surface. VT Diamond™ coatings (DLC – diamond-like carbon) are an example of a thin-film coating deposited using a PE-CVD process.
How do VaporTech Thin-Film Deposition Systems Work?
PVD and PE-CVD coatings are deposited using a thin-film deposition system. VaporTech systems consist of a vacuum chamber, a pumping system, and the power supplies that drive the deposition process. A batch of parts is loaded into the vacuum chamber and coated in a fully automated process. Once a thin-film coating is applied, parts may go directly to assembly or packaging. Systems are available in multiple sizes to coat small or large volumes of parts of different sizes (up to 1.2 meters, or 48 inches). Both metal and plastic parts can be coated using VaporTech’s unique low-temperature thin-film processes.
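As a rough illustration of the batch-planning arithmetic involved, the sketch below computes the deposition step time from a target film thickness and a constant deposition rate. Both numbers are hypothetical; real rates depend on the material, the process, and the chamber configuration.

```python
def deposition_time_min(target_thickness_nm: float, rate_nm_per_min: float) -> float:
    """Time to grow a film of the target thickness at a constant deposition rate."""
    return target_thickness_nm / rate_nm_per_min

# Hypothetical example: a 1.5-micron decorative coating grown at 25 nm/min.
cycle = deposition_time_min(target_thickness_nm=1500, rate_nm_per_min=25)
print(f"deposition step: {cycle:.0f} min")  # 60 min, excluding pump-down and venting
```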
Managerial and Financial Accounting Report
The role of managerial accounting is expanding. Business managers must be able to manage effectively the growing scope and size of their organizations. They also have to keep pace with the rapid growth and adoption of technology, be familiar with the regulatory environment, be able to compete successfully in global markets, and place an increasing emphasis on excellence.
When examining the major differences between financial and managerial accounting, we find that financial accounting information is reported in statements. The financial statements objectively and periodically report the results of past operations and the financial condition of the business according to Generally Accepted Accounting Principles (GAAP) (Vallabhaneni, 2003). Financial accounting serves external users of information, such as shareholders, creditors, financial analysts, lenders, unions, consumer groups, government agencies, and the public; its data are hard data and must meet audit criteria to be acceptable. Managerial accounting information, on the other hand, includes both historical and estimated data used by management in conducting daily operations, planning future operations, and developing overall business strategies (Vallabhaneni, 2003). It also includes information for decision-making, planning, directing, and controlling an organization’s operations and for appraising its competitive position. Its users are internal: business managers at all levels of the organization. Managerial accounting rules are set within the company to carry out management objectives related to adding value to the company, and managerial accounting data need only be relevant for management decisions.
Taking a closer look at reports, managerial accounting uses cost-of-production reports for decision-making. This comprises preparing detailed plans, budgets, forecasts, and performance reports for internal decision makers. Managerial accounting helps managers plan and administer the company’s operations. Accountants prepare budgets to express management’s goals in financial terms by identifying, measuring, accumulating, analyzing, interpreting, and communicating information. After a budget has been adopted, performance reports compare actual results with the budget. Cost accountants help management keep track of how much it costs the company to make its product or service (Shpargalka, 1999). Financial accounting involves preparing the business’s financial statements mainly for users outside the business. These reports are used by owners and potential owners of a business, and by people who have loaned the company money. Stockholders, suppliers, and banks also benefit from the financial reports that are generated (Horngren, Stratton, & Sundem, 2002).
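To make the budget-versus-actual comparison concrete, here is a minimal sketch of a performance report; the line items and figures are invented for illustration:

```python
# A toy performance report: actual results compared against the adopted budget.
budget = {"materials": 50_000, "labor": 80_000, "overhead": 30_000}
actual = {"materials": 54_000, "labor": 76_500, "overhead": 31_200}

print(f"{'Line item':<10} {'Budget':>10} {'Actual':>10} {'Variance':>10}")
for item, planned in budget.items():
    variance = actual[item] - planned  # positive = over budget (unfavorable for costs)
    print(f"{item:<10} {planned:>10,} {actual[item]:>10,} {variance:>+10,}")
```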
The table below summarizes the differences between financial and managerial accounting (Weygandt, Kieso, & Kimmel, 2001).
Primary Users of Reports
Financial: External users, who are stockholders, creditors, and regulatory agencies.
Managerial: Internal users, who are officers, department heads, managers, and supervisors in the company.
Types and Frequency of Reports
Financial: Classified financial statements, issued quarterly and annually.
Managerial: Internal reports, issued as often as needed.
Purpose of Reports
Financial: To provide general-purpose information for all users.
Managerial: To provide special-purpose information for a particular user for a specific decision.
Content of Reports
Financial: Pertains to the entity as a whole and is highly aggregated (condensed); limited to the double-entry accounting system and cost data; the reporting standard is Generally Accepted Accounting Principles.
Managerial: Pertains to subunits of the entity and may be detailed; may extend beyond the double-entry accounting system to any type of relevant data; the reporting standard is relevance to the decision to be made.
Verification Process
Financial: Annual independent audit by a certified public accountant.
Managerial: No independent audits.
As mentioned in Appendix A of our textbook (Hilton, Maher, & Selto, 2005), there are four ethical standards that the Institute of Management Accountants (IMA) requires of managerial accountants: competence, confidentiality, integrity, and objectivity.
Competence - In having competence, a managerial accountant must maintain expert knowledge and skills, follow laws and regulations, examine all suitable data, and provide complete information. In my line of work, competence means the ability to do my job: my earned knowledge and training, and how I apply them, could determine whether I save someone’s life. Being a Soldier is more than being an ordinary citizen; we have a duty to ensure the safety of others as well.
Confidentiality - A managerial accountant should not disclose confidential information unless legally obligated to do so, and should not allow subordinates to disclose confidential information either. Again, in my line of work we handle classified information that cannot be made public at will. Some of it concerns assignments whose disclosure could ultimately cost the lives of Soldiers and civilians; other confidential information involves national security and cannot be made public for obvious reasons.
Integrity - In showing integrity, a managerial accountant must communicate unfavorable as well as favorable information, including the limitations of that information, avoid apparent or actual conflicts of interest, and avoid activities that would discredit the profession. An illustrative example from my work: when Soldiers leave on a risky operation and a firefight develops, I have to have the integrity to report what really happened, not cover up the story. That is holding true to one’s values.
Objectivity - In being objective, a managerial accountant must communicate all information fairly and fully disclose all relevant information. As a supervisor of many Soldiers, I have to ensure that all units under my command receive the same fair treatment no matter what the mission or task. I have to treat each one of them the same and remain neutral.
For managerial accountants, it is vital to adhere strictly to the above standards in order to thrive in today’s society.
Overall, financial and managerial accounting are both important aspects of the business world. Most companies incorporate some form of each into their business operations. By following the appropriate standards for each, a business can successfully keep track of its financial standing for internal as well as external purposes.
Butler, M. (2002). Can financial professionals be trusted? Strategic Finance, 84(2), 5. Retrieved December 14, 2005, from EBSCO research database.
D'Aquila, J. M. (1998). Is the control environment related to financial reporting decisions? Managerial Auditing Journal, 13(8), 472-478. Retrieved December 17, 2005, from ABI/INFORM Global database.
Hilton, Maher, & Selto. (2005). Cost management: Strategic decision making (3rd ed.). New York: McGraw-Hill.
Horngren, C. T., Stratton, W. O., & Sundem, G. L. (2002). Introduction to management accounting (12th ed.). New Jersey: Prentice Hall.
Judd, N. (2005). Independent Study Confirms CMA and CFM Credentials Set the Worldwide Standard for Accountants in Business. PR Newswire, 1. Retrieved December 17, 2005, from Business Dateline database.
Shpargalka. (1999). Accounting. Retrieved December 17, 2005, from http://www.library.by/shpargalka/belarus/english/001/eng-013.htm
Vallabhaneni, S. R. (2003). The Differences between Managerial and Financial Accounting. Accounting. Retrieved December 17, 2005, from http://www.srvbooks.com/samples/module600.pdf
Weygandt, Kieso, & Kimmel. (2001). Managerial accounting. Retrieved December 14, 2005, from http://www.gpc.edu/~vstarbuc/Presentations/Acct201Weygandtppt/
"PETER DRUCKER" QUOTES
- Almost everybody today believes that nothing in economic history has ever moved
as fast as, or had a greater impact than, the Information Revolution. But the
Industrial Revolution moved at least as fast in the same time span, and had
probably an equal impact if not a greater one.
- A manager is responsible for the application and performance of knowledge.
- Business has only two functions - marketing and innovation.
- Checking the results of a decision against its expectations shows executives
what their strengths are, where they need to improve, and where they lack
knowledge or information.
- Dealmaking beats working. That's why there are deals that make no sense.
- Effective leadership is not about making speeches or being liked; leadership is
defined by results not attributes.
- Efficiency is doing better what is already being done.
- Efficiency is doing things right; effectiveness is doing the right things.
- Executives owe it to the organization and to their fellow workers not to
tolerate nonperforming individuals in important jobs.
- For centuries, we have attempted communication "downward". This however, cannot
work, no matter how hard and how intelligently we try. It cannot work, first,
because it focuses on what we want to say. It assumes, in other words, that the
utterer communicates. There can be no communication if it is conceived as going
from "I" to "thou." Communication works only from one member of "us" to
- In all recorded history there has not been one economist who has had to worry
about where the next meal would come from.
- It's more important to do the right thing than to do things right.
- Making good decisions is a crucial skill at every level.
- Management by objectives works if you first think through your objectives.
Ninety percent of the time you haven't.
- Management is doing things right; leadership is doing the right things.
- Management means, in the last analysis, the substitution of thought for brawn
and muscle, of knowledge for folklore and superstition, and of cooperation for
force...
- Most discussions of decision making assume that only senior executives make
decisions or that only senior executives' decisions matter. This is a dangerous
mistake.
- Most management people I know still believe that the Bible begins with the
words: 'In the beginning God created stable exchange rates.'
- No decision has been made unless carrying it out in specific steps has become
someone’s work assignment and responsibility.
- No executive has ever suffered because his subordinates were strong and effective.
- One of the great movements in my lifetime among educated people is the need to
commit themselves to action. Most people are not satisfied with giving money;
we also feel we need to work.
- People who don't take risks generally make about two big mistakes a year.
- People who do take risks generally make about two big mistakes a year.
- Plans are only good intentions unless they immediately degenerate into hard work.
- Quality in a product or service is not what the supplier puts in. It is what
the customer gets out and is willing to pay for. A product is not quality
because it is hard to make and costs a lot of money, as manufacturers typically
believe. This is incompetence. Customers pay only for what is of use to them
and gives them value. Nothing else constitutes quality.
- So much of what we call management consists in making it difficult for people to work.
- Some of the best business and nonprofit CEOs I've worked with over a
sixty-five-year consulting career were not stereotypical leaders. They were all
over the map in terms of their personalities, attitudes, values, strengths, and weaknesses.
- Suppliers and especially manufacturers have market power because they have
information about a product or a service that the customer does not and cannot
have, and does not need if he can trust the brand. This explains the
profitability of brands.
- Teamwork is neither 'good' nor 'desirable'. It is a fact. Wherever people work
together or play together they do so as a team. Which team to use for what
purpose is a crucial, difficult and risky decision that is even harder to
unmake. Managements have yet to learn how to make it.
- The best way to predict the future ... is to create it!
- The computer is a moron.
- The corporation is the “master”, the employee is the “servant”. Because the
corporation owns the means of production without which the employee could not
make a living, the employee needs the corporation more than vice versa.
- The entrepreneur always searches for change, responds to it, and exploits it as an opportunity.
- The most efficient way to produce anything is to bring together under one
management as many as possible of the activities needed to turn out the product.
- The most important thing in communication is to hear what isn't being said.
- The most serious mistakes are not being made as a result of wrong answers. The
truly dangerous thing is asking the wrong question.
- The new information technology.. Internet and e-mail.. have practically
eliminated the physical costs of communications.
- The purpose of business is to create and keep a customer.
- There are an enormous number of managers who have retired on the job.
- There is nothing so useless as doing efficiently that which should not be done at all.
- Time is the scarcest resource and unless it is managed nothing else can be managed.
- Too many mergers resemble the marriage of two cripples who become twice as old,
twice as bureaucratic and twice as undynamic.
- Unless commitment is made, there are only promises and hopes... but no plans.
- We know nothing about motivation. All we can do is write books about it.
- What you have to do and the way you have to do it is incredibly simple. Whether
you are willing to do it is another matter.
- Whenever you see a successful business, someone once made a courageous decision.
One of the key design requirements of the Saddleworth Community Hydro scheme is to ensure that, once the compensation flow water has passed through the Crossflow turbine, the water must be returned to the natural watercourse.
Prior to the Crossflow turbine being installed, the compensation flow (and its useful energy) was channelled directly to the stilling basin before re-entering the natural watercourse – in this case Chew Brook. To re-establish this process, a physical link had to be made between the culvert leading from the turbine house and the stilling basin. This involved cutting a large hole from the culvert through the outside wall of the stilling basin, through which a pipe was inserted. Water now passes through the turbine, into the culvert, and through the pipe into the stilling basin.
However, in order to make this connection, the first step was to create a water-free working area within the stilling basin. This was achieved using an “Aqua Dam”, which was lowered into the stilling basin by an excavator and placed into position. The working area was then pumped free of water.
With the working area within the stilling basin reasonably free of water, it was possible to break through the existing wall by working from both sides. Wire cutting was used because it minimises the risk to operatives and the damage to the stilling basin structure that other methods could cause. Once the breakthrough was achieved, a large plastic pipe was inserted on a gravel bed with a gravel surround. With the stilling basin still free of water, the opening around the pipe will be made good; the Aqua Dam can then be removed.
Are you considering a hydropower project in the UK, Ireland or overseas?
The first step to develop any small or micro hydropower site is to conduct a full feasibility study.
Once complete, you will understand the site’s potential and be guided through the next steps to develop your project. You can read more about hydropower in our Hydro Learning Centre.
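As an illustration of what a feasibility study quantifies first, the gross power of a site follows P = η · ρ · g · Q · H. A minimal sketch with placeholder site figures (not Saddleworth data):

```python
RHO, G = 1000.0, 9.81  # water density (kg/m^3) and gravitational acceleration (m/s^2)

def hydro_power_kw(flow_m3s: float, head_m: float, efficiency: float = 0.7) -> float:
    """P = eta * rho * g * Q * H, returned in kilowatts."""
    return efficiency * RHO * G * flow_m3s * head_m / 1000.0

# Placeholder figures: 250 l/s of flow over a 20 m head at 70% overall efficiency.
print(f"{hydro_power_kw(flow_m3s=0.25, head_m=20):.1f} kW")  # ~34.3 kW
```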
Why revive the Bataan Nuclear Power Plant?
BATAAN, Philippines – A public decision could seal the fate of the old and dormant Bataan Nuclear Power Plant (BNPP), but first, the state would have to lay the options down clearly.
In December 2015, International Atomic Energy Agency (IAEA) chief Yukiya Amano said his agency is ready to assist the Philippines should it decide to revive the BNPP.
"If you decide (to reopen it) we are ready to help,” Amano said.
But the United Nations' nuclear watchdog clarified that making a decision would be up to the Philippine government.
Amano was in the country to attend the 3rd Nuclear Congress, a multi-sectoral meeting assessing the progress made by the Philippines in using nuclear energy, along with the challenges in harnessing it.
For every year the matter sits in indecision alley, the government spends P50 million ($1.06 million) to maintain the mothballed plant. Such has been the case since 2007, when the Philippines completed payment of the $2.3-billion debt that had funded the BNPP's construction between 1976 and 1984.
When the Marcos dictatorship crumbled, the administration of Corazon Aquino transferred the nuclear plant’s assets to the government without ever operating it. The plant was mothballed in the wake of the 1986 Chernobyl accident in Ukraine, then part of the Soviet Union.
Talks about fueling the power facility were plentiful 8 years ago, however. The Department of Energy (DOE) led the gargantuan task of drafting the country’s nuclear energy policy. Owner and operator National Power Corporation (NAPOCOR) signed a memorandum of understanding with the Korean Electric Power Company (KEPCO) to assess the viability of rehabilitating the aging station.
The study concluded it would take $1 billion spread over 4 years to restore the BNPP: 80% of the plant and equipment needed overhauling, and the rest had to be replaced.
In the following year, the House of Representatives’ Committee on Energy approved a bill aimed at commissioning and rehabilitating it.
Everything stalled following Japan's Fukushima Daiichi disaster in 2011 – the worst nuclear crisis since Chernobyl.
The issue reawakened in an energy committee hearing in Congress the last quarter of 2015. (READ: Regulatory body pushed to study Bataan Nuclear Power Plant revival)
In October last year, the DOE also convened the inter-agency core group responsible for creating the policy alongside NAPOCOR and the Philippine Nuclear Research Institute (PNRI), whose mandate is to regulate nuclear power plants.
"As a technology, nuclear power has been shown to be safe, clean, and cheap as evidenced by the continuing operation of several nuclear power plants all over the world,” Teofilo Leonin, PNRI nuclear division chief, told Rappler in an email.
But according to Leonin, NAPOCOR has to prove to the regulatory body “the safety and minimal impact to the natural environment of [the BNPP's] operation.”
Fears of a Fukushima repeat
In a country prone to earthquakes and other catastrophes, fears of a Fukushima repeat cannot be shrugged off.
The tsunami produced by the magnitude-9.0 earthquake in Japan damaged several of Fukushima’s reactors and disabled their cooling systems, which resulted in the release of radioactive materials.
The Japanese nuclear plant had been designed for a peak horizontal ground acceleration of 0.1g, the minimum the IAEA requires “regardless of any lower apparent exposure to seismic hazard.”
The BNPP, by contrast, is designed for a peak acceleration of 0.4g, and NAPOCOR claims it can withstand the greatest tremor projected to hit Luzon.
“It's well-protected from tsunami,” NAPOCOR Asset Preservation Department Manager Mauro Marcelo Jr told the media in a tour of the massive yet sleeping powerhouse. It lies 18 meters above sea level in a 389-hectare lot in Napot Point, Morong, Bataan.
The BNPP’s 3-loop design is similar to that of 3 running power stations in the world: Angra I in Brazil, Krško in Slovenia, and Kori II in South Korea. Kori II has won awards for its remarkable uptime and reliability.
“The future owner or operator of the BNPP will have to go through the whole regulatory process and submit pertinent documents to support [its claims],” Leonin said.
“This includes documents showing that a Fukushima-type disaster or any kind of natural or man-made disaster will have minimal effects to the population and the environment, as prescribed by national and international requirements and standards."
The PNRI is speeding up the structuring of its regulatory requirements. Once a national policy is in place, the BNPP and all subsequent nuclear power plants will be subjected to the regulatory process, which will take at least 5 years to complete.
Cheaper than coal
Apart from seeking a study of the viability of reopening the BNPP, the DOE also recommended converting what Bloomberg dubbed an "empty shell" into a coal plant.
But former Pangasinan representative Mark Cojuangco, who supports nuclear power, argued that the country cannot be tied further to fossil fuels. In the 14th Congress, he filed House Bill 4631, the Bataan Nuclear Power Plant Commissioning Act of 2008.
According to Cojuangco, 1.7 million tons of coal, equivalent to a 200-kilometer train, is needed to run a plant non-stop for an entire year. In contrast, he said, only a small amount of nuclear fuel is needed to produce great power: nuclear fuel that could fit into a medium truck would be enough to generate electricity for 18 months. That would replace the 1.7 million tons of coal needed to produce electricity every year, or 2.5 million tons of coal every 18 months.
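Cojuangco's equivalence is simple arithmetic, and the figures quoted above can be checked directly:

```python
# Checking the coal-equivalence figures quoted in this article.
coal_tons_per_year = 1_700_000
refuel_interval_years = 1.5  # one nuclear fuel load lasts 18 months

coal_per_refuel_interval = coal_tons_per_year * refuel_interval_years
print(f"{coal_per_refuel_interval:,.0f} tons per 18 months")  # 2,550,000, rounded to 2.5 million in the article
```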
In a Rappler interview two years ago, Marcelo said the 620-megawatt (MW) capacity in Bataan can supply 10% of the Luzon grid. The island region’s power requirement reached a peak demand of 8,791 MW during the summer. But in the congressional hearing late last year, NAPOCOR was not able to provide figures on how this would translate to consumer costs.
Oriental Mindoro 2nd District Representative Reynaldo Umali, the chairman of the House Energy Committee, then asked NAPOCOR to rid the public of false hopes concerning lower electricity charges.
The real costs of nuclear power
Umali also mentioned that the $1 billion rehabilitation budget could build several renewable energy plants. In 2011, the government established the National Renewable Energy Program, which aims to boost the current capacity of renewable sources from 5,438 MW to 15,304 MW by 2030.
But Cojuangco believes renewable energy – solar and wind in particular – is a less reliable source due to its non-baseload nature, and would require an investment 4.34 times bigger than backup sources such as coal, gas, and nuclear.
For environmental group Greenpeace, the imminent costs of commissioning and operating the BNPP would outweigh its power generation benefits, given the failures of other nuclear facilities abroad, and the burden of debt that was passed on to Filipinos. Greenpeace released its position paper amid the discussion on the merits of Cojuangco’s HB 4631. (READ: 'Nuclear power to lower electricity costs')
The group also cited Finland, where construction of a new reactor had run €1.5 billion over budget in 2009 values, and argued rehabilitation would surpass $1 billion because of “past experience on nuclear plant overruns and delays, the BNPP’s age, and documented defects.”
Greenpeace also urged stakeholders to look closely at the price tag of all stages of a nuclear plant’s lifetime and beyond. In nuclear power, direct costs are incurred for construction, operations and maintenance (including uranium fuel), waste storage, and decommissioning.
The group argued that the commissioning budget would be taken from state coffers and, as such, there would be “provisions to raise money via surcharges to consumers and/or international or domestic loans.”
The PNRI’s Leonin said that it does not matter if it would take $1 billion or more, as long as the BNPP, if revived, performs safely in the long run.
“[It is] the responsibility of the owner or operator to inform the public of all issues for them to have an informed basis for making a stand,” Leonin said.
He added that the country’s President would still have the last say when it comes to reviving nuclear power. The chief executive holds the power to halt plebiscites.
For Philippine Ambassador to Austria Zeneida Angara Collinson, the country’s representative to the IAEA in Vienna, the Philippines should not fear nuclear power.
Speaking on the sidelines of the 3rd Nuclear Congress, Collinson noted it would be interesting to see if nuclear energy is on the agenda of presidential candidates in the 2016 Philippine elections. – Rappler.com
Shadz Loresco is a freelance business writer for both online and print. Follow her on Twitter: @shadzloresco.
$1 = P47.27 |
Data culled from the 2002 Ag Census show that approximately 46% of Wisconsin farmers who identified farming as their primary occupation are 55 years old or older. The United States Department of Agriculture estimates that over 500,000 of the nation’s two million farmers will retire during the next decade and will be replaced by 350,000 entrants. Using a conservative extrapolation, this means a potential for thousands of farm transfers in Wisconsin over the next ten years, at a time of complex and rapid change in the industry due to technological innovations, trade and other government policies, a growing world population, and urban pressures on agricultural lands, as well as conservation issues and environmental concerns. Research conducted in the four southwest counties of Wisconsin shows that only a minority of farmers have identified a successor and/or developed farm business succession plans, and that a majority of farmers have not discussed their retirement or succession plans with anyone. Survey responses show farmers in the study valuing an equal division of assets for inheritance, which may negatively impact on-farm heirs’ ability to continue farming. University of Wisconsin Cooperative Extension developed one-day, three-day, and four-day farm succession programs to build awareness and facilitate the development of succession plans. Instructors lead participants through visioning and goal-setting exercises with the use of the case farm, Bella Acres. At the end of the three- and four-day workshops, participants identified personal and business goals and developed an action plan to move their plan forward.
Conference: 2007 National Extension Risk Management Education Conference
Presentation Type: 60-Minute Concurrent
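The "thousands of transfers" extrapolation can be illustrated by scaling the national figures in the abstract; the Wisconsin farm count below is a placeholder, not a number from the study:

```python
# National estimate: 500,000 of 2,000,000 farmers retiring over the next decade.
retiring_share = 500_000 / 2_000_000   # 25%
wisconsin_farms = 75_000               # placeholder value, not a figure from the study

potential_transfers = retiring_share * wisconsin_farms
print(f"~{potential_transfers:,.0f} potential farm transfers over ten years")  # ~18,750
```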
Traceability is the ability to verify the history, location, or application of an item by means of documented recorded identification. – Wikipedia
Traceability requirements are essential in CE marking, as they support the market surveillance. They are designed to trace the history of the product and include procedures such as labelling the product and identifying the economic operators in the supply chain (manufacturer, importer, distributor, retailer).
The traceability of a product’s history is important because it makes the enforcement of corrective measures, such as withdrawals and recalls, possible. It does so by clarifying the role of each economic operator along the supply chain, which helps determine who is responsible for non-compliant products. In other words, it enables market surveillance authorities to trace products from the factory gate to the consumer. Traceability requirements also help the manufacturer maintain effective control of the production phase and of suppliers, and can reduce the impact of corrective measures, depending on the traceability system used.
EU legislation is prescriptive only in terms of requirements, without imposing the means to meet the provisions. It also does not foresee any specific type of technology to be used, such as printing or moulding, leaving the choice of the traceability system to the manufacturer, depending on what is most appropriate for the product and the distribution system.
A key traceability requirement is the indication of the manufacturer’s name and address on the product, as well as the importer’s where applicable. This allows the market surveillance authorities to contact the economic operator responsible for placing an unsafe product on the market. There is no explicit obligation to precede the address or addresses with the words ‘Manufactured by’, ‘Imported by’ or ‘Represented by’, but there is an implicit one, since the role of each economic operator in the chain must be identifiable. There is, however, no requirement to translate this information into all necessary languages.
The legal provisions establishing the practices regarding traceability can be found in Regulation (EC) 765/2008 and Decision 768/2008/EC. More specifically, the following is required from the economic operator:
1. Indication of name and address of manufacturer:
Manufacturers must “indicate their name, registered trade name or registered trade mark and the address at which they can be contacted, on the product or, where that is not possible, on its packaging or in a document accompanying the product. The address must indicate a single point at which the manufacturer can be contacted.” (European Commission, 2013) This obligation applies regardless of whether the manufacturer is located in the EU or not. Affixing the name and address on the product itself is not compulsory where technical or economic conditions make it unreasonable; for example, products such as hearing aids are too small to carry this information. The assessment of these conditions is to be made by the manufacturer. Only one single contact point in the EU is allowed, which does not have to be the location where the manufacturer is established; it can also be the address of an Authorized Representative.
2. Indication of name and address of importer:
For importers, the same rules apply as for manufacturers. However, the importer must not place its name and address in such a way that they cover the information provided by the manufacturer. Also, if placing the name and address on the product would require opening the package, the importer shall provide this information on the package instead.
3. Identification element:
Manufacturers also have “to ensure that their products bear a type, batch, serial or model number or other element allowing their identification, or, where the size or nature of the product does not allow it, that the required information is provided on the packaging or in a document accompanying the product.” (European Commission, 2013) They can opt for their preferred identification element, as long as it makes traceability possible. Where applicable, the identification element should be the same as the one used in the EU Declaration of Conformity. If the product is composed of several parts, the manufacturer may choose between placing the identification number on the package or accompanying document, using additional markings for the individual components, or giving an item number (the ‘SKU’ – stock keeping unit) to the whole product, which would also be the one enclosed within the EU Declaration of Conformity.
4. Identification of economic operators:
Economic operators have to “identify any economic operator who has supplied them with a product and any economic operator to whom they have supplied a product,” (European Commission, 2013), excluding the end consumer, who for the purposes of the relevant legislation is not considered an economic operator. This obligation applies for a minimum period of 10 years. The procedure for complying with this requirement is not explicitly described; however, it is recommended that economic operators keep the documents that enable traceability (e.g. invoices) for a longer period of time in case they have to be presented to the market surveillance authorities.
European Commission. (2013). The ‘Blue Guide’ on the implementation of EU product rules.
|
Solar Thermal Hydrogen Production – An Adventure
by Klaus Röhrich
Fifty million tons of hydrogen are used every year, mainly for making fertilizers and for refining petroleum. Today, hydrogen is also being explored as an energy storage medium, because storing electricity at large scale is costly and inefficient. Hydrogen can be used as a chemical feedstock, in particular to make hydrocarbons, so it can replace petroleum, and it is a source of clean power (for transportation, for example) because its use produces nothing but water.
Wouldn’t it be great to make hydrogen in a clean and sustainable way?
The history of H2P began in 2001 on the premises of Creative Services s.a.r.l. with contemplations about what to do with acetylene. From a coffee table discussion the subject evolved into patent applications and the search for funds to realize the idea.
H2P is a hydrogen production technology based on thermal water splitting at temperatures above 2200°C. The concept employs a particular thermal configuration that allows oxygen to be extracted directly from the hot, dissociated steam and hydrogen to be extracted outside a gas-permeable insulation. Thermal efficiency is thus maximized because no ballast gases are transported between the cold and hot zones. Although any heat source providing sufficient power could be employed, in practice only heating with concentrated sunlight is feasible.
Solar hydrogen generators, just like CSP plants, would be installed in places with plenty of sunshine. Besides a bit of power for controls, the hydrogen is produced without electricity, without fossil feedstock, and without emissions or any other pollution. The social and economic impact would be dramatic.
The three inventors got in contact with business developers and eventually created Clean Hydrogen Producers ltd. in 2006. One and a half million euros were raised. However, part of this sum was diverted by the business developers. Furthermore, they used the setup for fraudulent financial activities, funnelling approximately thirty million dollars from private investors through a fund in Liechtenstein to unknown recipients. Obviously, this ended up with lawyers and prosecutors in the USA, UK and Switzerland.
This is a good example of how well-intended tools for start-ups and small private companies can be turned into an instrument to extract and divert funds from innocent investors wanting to support a promising enterprise. I presented the case in consecutive years to students at a Geneva-based business university.
In spite of the fraud that was going on (or rather, before it became apparent), we attempted to build a first hydrogen generator. We were assisted by universities and research institutes, foremost the Geneva Engineering School (Hes-SO/HEPIA) and the Swedish Ceramics Institute (SWEREA/IVF). As could be expected, we destroyed some ceramic components in our first heating trials, reaching merely 1400°C with our furnace in Gothenburg. Unfortunately, the Swedish project leader fell sick during this time and was out. Together with the ongoing fraud, this brought our development work to a standstill at the end of 2007, and Clean Hydrogen Producers ltd. was abandoned.
However, based on the encouragement we had received from experts in science, industry and business, and not being intimidated by the challenges of working above 2200°C, we began anew. The intellectual property, three patents granted in over fifty countries worldwide, was moved to a new structure, H2 Power Systems ltd. About half of the investors who contributed to the initial funding of Clean Hydrogen Producers came along.
H2 Power Systems was originally based on the Isle of Man. We had talked to various fund raisers, venture capitalists and private investors. All saw the economic potential of an efficient and cheap solar hydrogen generator, and several recommended basing the business in a tax-favourable location. We were stupid enough to follow such advice. During 2008 and 2009 we first pursued financing with German private investors. This financing attempt failed because the main investor died of a heart attack in November 2008. He would have asked us to move our business to Germany.
While struggling with applications for public support, we came upon an Irish investor. After five months of due diligence, in February 2010, H2 Power Systems was moved to Dublin and received two and a half million euros. Our hope was to build a laboratory prototype hydrogen generator within the next two years.
We attracted the interest of the Fraunhofer IKTS, which agreed to develop furnaces for testing high-temperature ceramics and making hydrogen. The Solar Laboratory at PSI expressed interest in building our solar hydrogen generator once we provided the core components and, of course, funds. HEPIA was still supporting us, and the Energy Research Center of the Netherlands provided us with their high-performance hydrogen filters.
During 2010 we conceived and built a device for testing ceramic filters above 2200°C. Up to 2000°C everything was fine. Above that, we gained quite a number of new experiences with chemical reactions and ceramic structural changes before we were able to heat steam to close to 2300°C. We measured the oxygen permeability of various ceramic filters. In the year that followed we constructed an improved furnace with an extension for extracting not only the oxygen but also the hydrogen from the steam. On 26 July 2011 we produced hydrogen for the first time.
It was now time to improve our filters, which had reached five to ten per cent of what we called economically viable performance. Unfortunately, this was and still is a costly and time-consuming activity. Within the given possibilities, we could develop a new material, produce some filters and test them in approximately three months, at a cost of around thirty thousand euros. As you can easily see, that activity absorbed a lot of our time and resources.
We had just begun the R&D on the high temperature ceramic oxygen filters when the next event hit the project. In October 2012 our sponsor went into liquidation and somewhat later H2 Power Systems ran out of funds.
The events that led to this liquidation are quite revealing in showing the interconnections between a small private enterprise like ours and international business and politics. Our sponsor had made his fortune in real estate. During the crisis of 2008 a lot of his property lost its value, and when it came to reshuffling debt, the loans were no longer covered by the actual value of the property. The Irish government, in the form of NAMA (National Asset Management Agency), came in and secured these loans. So far so good, as the real estate provided regular income and business went on as usual. But then the Irish government came under pressure from the European Union to get rid of its bad debt and reduce its deficit. That meant selling the related property or businesses, and the forced liquidation of our sponsor.
We went on with our R&D, with decreasing activity, roughly into 2014, when all reserves were consumed. The liquidation of the sponsor meant that the Irish government ordered an accounting firm to sell the sponsor’s real estate, worth on the order of two billion euros, and at the same time his share in H2 Power Systems.
As H2 Power Systems was only a minor crumb of the cake, and the liquidator was constrained to do nothing other than sell the share in H2 Power Systems, new fundraising was no longer possible. A new investor would have had to buy out the liquidator for considerable money before investing. We could not find anybody interested in doing so.
By the end of 2015 the last penny of H2 Power Systems had been spent. Patent fees were not paid anymore. In July 2016 the last patents lapsed.
Gladly, our story shows that an ambitious project, ambitious in terms of cost and time, can find financing. There are people not only thinking about our future but also putting their money where their mouth is. Sadly, there are also people abusing good ideas. And sometimes events at the larger scale can hit the small man.
I deeply regret that our solar hydrogen generator development found such a sad end. All the research results are still available, and the material and equipment are in storage in Geneva. If there were someone out there with interest and resources, I’d be happy to give it another try, and I’m sure my co-inventors would too. |
Energy companies use surface mining to recover resources from about 20 per cent of Alberta’s oil sands. It’s our job to ensure that companies extract oil from the sand responsibly. Once a company has finished mining in an area, we also regulate how the land is returned to its original (or equivalent) state. Companies must submit applications and receive our approval before developing this way.
How does surface mining work?
In surface mining, companies use trucks and shovels to scoop up oil sands from the ground. The oil sands are then transported to extraction plants, where bitumen (heavy oil) is separated from the sand.
A company might decide to sell its extracted bitumen as a product on its own. Alternatively, it might decide to further upgrade the bitumen to synthetic crude oil. In either case, any environmental impacts such as tailings must be managed.
Protecting Albertans and the Environment
Companies must meet our tailings performance commitments so that resources are conserved and reclamation can happen as soon as possible. Learn how we’re addressing the environmental impacts of tailings through the Tailings Management Framework.
We also require companies to estimate the volume of bitumen that can be recovered from their oil sands operations. Directive 082: Operating Criteria: Resource Recovery Requirements for Oil Sands Mine and Processing Plant Operations helps companies identify that amount.
Compliance and Enforcement
We conduct regular inspections and audits to make sure that companies are following our requirements. If we find that a company isn’t complying, we’ll take the appropriate compliance and enforcement actions and share our findings on the Compliance Dashboard.
Our annual Water Use Performance Report outlines and evaluates how companies use water in mining operations. |
Washington State receives 70% of the sunshine that Los Angeles does. The Pacific Northwest gets more solar energy than Germany, the leading global user of solar! We have some of the best incentives in the country, and solar panels are actually more efficient in cooler climates. Thousands of homes and businesses have solar installed, saving them money.
To learn about the solar potential of your rooftop, use the PV Watts website. Plug in your address, map out a solar system on your roof that faces closest to south and find an estimate for what your roof can produce.
Long, Sunny Summer Days
Western Washington is famous for its long, rainy winters, but its position north of the 45th parallel makes its summer hours the main attraction for a solar investment. Summers in the Pacific Northwest are generally sunny and clear, and daylight hours can stretch from nearly 5 AM to 10 PM in much of the state. These long, clear and (mostly) cool days produce large amounts of electricity. This electricity feeds back into the power grid, allowing energy companies to reduce the amount of power they are producing at the time.
At first glance, these long, summer days wouldn’t appear to help out much during cloudy, winter months. However, there is a clever legal policy in Washington called “Net Metering” that changes the game. During the summer, when your home or business produces more solar power than it is consuming, the excess electricity is pushed out into the grid and is used by the neighboring properties in your area. As the electricity does this, it passes through your electric meter, essentially causing it to run backwards.
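To see how banking summer surplus against winter draw might play out over a year, here is a minimal sketch; every number below is made up for illustration (actual production depends on system size, roof orientation and household usage, and most utilities also true up the balance once a year):

```python
# Hypothetical (produced, consumed) kWh per month for a Western Washington home
monthly = [
    (300, 900), (400, 800), (600, 700), (800, 650),     # Jan-Apr
    (950, 600), (1050, 550), (1150, 550), (1000, 600),  # May-Aug
    (750, 650), (500, 750), (350, 850), (250, 950),     # Sep-Dec
]

credit_kwh = 0.0
billed_kwh = 0.0
for produced, consumed in monthly:
    net = produced - consumed
    if net >= 0:
        credit_kwh += net               # surplus runs the meter backwards
    else:
        draw = -net
        offset = min(credit_kwh, draw)  # banked credit covers part of the draw
        credit_kwh -= offset
        billed_kwh += draw - offset     # the remainder is billed at the normal rate

print(f"kWh billed over the year: {billed_kwh:.0f}")          # 1100
print(f"Credit remaining at year end: {credit_kwh:.0f} kWh")  # 650
```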
While electrical production in the cloudy winter months is not nearly what it is in the summer, solar panels do produce power through the clouds. With careful attention to energy efficiency in the home, it is even possible to realize a net gain on occasion, especially if your home remains empty for much of the day.
On average, of course, you will draw more power than you produce during cloudy, Western Washington winters; but even brief periods of light clouds or patchy sun can make a significant difference. In the end, it is the total power your system produces for the year that matters most from an investment standpoint. |
by Dean Cawsey
What does the world look like ‘after’ coal? It goes without saying that the industry in Wales certainly isn’t what it used to be. When the industry employed a vast number of working men in Wales, it provided cultural, social and economic growth in valleys across the country. A central component of transatlantic analysis of coal stems from the understanding that Wales’ western coalfields are, indeed, in a state of ‘after coal’. This differs from their Appalachian equivalents, which are still very much ‘with coal’ and at the start of their journey of transition.
People sometimes forget that the decline in the Welsh industry began as early as the 1910s, and not as a result of the eventual death knell sounded by a certain Mrs Thatcher. During this period production peaked and more than 57 million tonnes of coal were produced by 232,000 men working in 620 mines. The inter-war years saw a recession in the coal industry as a result of overseas expansion and the industrialisation of poor, un-unionised territories where labour was cheap and resources were plentiful. Added to this was the shift to oil power in shipping and the naval Armed Forces. Despite a resurgence during the 1970s oil crisis, production continued to decline, and not even industrial action led by the most powerful workforce in the world, in the shape of the 1984/85 Miners’ Strike, was enough to counter the terminal blow from Prime Minister Thatcher.
Whilst the above description remains the mainstream understanding of the ‘life’ of coal in Wales, it does not mean that the industry is completely dead: many communities across Britain, Neath Port Talbot in particular, still rely heavily on King Coal. This reliance manifests itself not only in terms of jobs, but also in terms of the energy that fuels the power stations that produce the electricity that boils the kettles in kitchens all over Wales. However, the numbers presented below perhaps demonstrate a dichotomy of reliance, as the necessity of coal as energy isn’t matched by the production of coal as a provider of jobs.
In 2012, the United Kingdom imported nearly 45 million tonnes of coal from countries as far away as Colombia, Russia, South Africa and the United States of America, some of which was used domestically to heat homes. The majority was burned in coal-fired power stations. Our evident consumption is not matched by production, though, and in the 47 UK-based coal sites (underground and surface mining) a paltry-in-comparison 12 million tonnes was returned during the same period.
Nevertheless, the coal industry in Britain continues to directly provide over 6000 jobs, 1100 of which are in Wales. In Neath Port Talbot, eight sites (five surface and three underground) produce over 72,000 tonnes of coal a month and create almost 800 jobs. This doesn’t include the Powys-based Nant Helen Opencast, which borders several Neath Port Talbot communities and provides an additional 130 jobs. Whilst the industry in the UK may well be on its deathbed, our consumption of coal is alive and kicking, and the positive impact that jobs have on a community is evident. Together, this continued reliance on coal paints a picture that cannot be ignored.
We use a tremendous amount of coal but actually produce relatively little, despite the obvious benefits industry jobs bring to an area, and despite the smaller carbon footprint of coal mined and burned in Wales compared with coal mined in, say, Colombia and burned here. The argument for this situation is always cost, and whilst the direct price of foreign coal may make economic sense, we are yet to know the real cost of energy dependence, de-industrialisation, continental logistics and global eco-systems.
Perhaps what is needed, in Wales at least, is a re-examination of the feasibility of mining Welsh coal, and what that would bring to the economic and social regeneration of our communities and the country as a whole. At a time when unemployment is at its highest in decades, the Westminster Government is hell-bent on implementing welfare reform which has the potential to decimate families, and good quality jobs are at a premium, the idea isn’t as crazy as it might first sound.
With thanks to Keith Jones, Manager of Onllwyn Coal Distribution Centre for help in researching information and statistics. |
Juracán Energy (JE)
In Puerto Rico, there is a high dependence on fossil fuels to produce electricity at high cost. After the collapse of the whole power grid, alternatives for generating electricity became a priority, and renewable energy has become a very attractive source. As a team, we decided to use the energy produced by the wind to address the lack of electrical power in remote communities and their respective water distribution systems. Due to space constraints and topography, wind is considered a better alternative than solar photovoltaic systems.
In the aftermath of Hurricane Maria, it was clear that reliable wind turbines should be a priority for the team; they would need to withstand sustained winds of 155 miles per hour and gusts of up to 201 miles per hour. The design is a horizontal-axis, three-bladed wind turbine. It should be capable of self-starting and producing power without any external source in case of a catastrophe in the system. It should also be as cost-effective as possible, by eliminating components or combining efficient materials.
The strategy has been to implement a systems engineering approach to tackle such a complex and multidisciplinary project. From the beginning, it was established that the team would pursue a wind energy application relevant to Puerto Rico and a site for a wind farm. The initial approach was to assemble a multidisciplinary team within the School of Engineering to search for potential wind energy applications in the island. Once the application was decided, the team was divided into subsystem teams which managed the design of the individual components of the system. The subsystems will be integrated at a later stage in the project along with the fabrication of the prototype for the Collegiate Wind Competition to deliver the final product.
Our main strength came after the disastrous phenomenon that occurred in Puerto Rico with Hurricane Maria: unity. Maria taught us as a team to maintain unity no matter what, and we realized that we can do more as a team than as individuals. The Collegiate Wind Competition has provided us the opportunity to make something that can help our island recover and prepare for any other similar event. We are looking forward to acquiring experience in the development of sustainable energy systems.
Our main obstacle is our topography, which is not favorable for wind turbines: the areas with higher wind velocities are natural reserves or landmarks. There are also limitations on the wind velocities (too low) achievable during wind turbine tests in the wind tunnel.
COLLEGIATE WIND COMPETITION OBJECTIVES
The JE Team believes the greatest asset that can be obtained from the Collegiate Wind Competition is the experience of working as an actual business, from the inception of the idea through the development and marketing of a deliverable product. We hope to get involved in the renewable energy industry, build a background for our future, and use the acquired knowledge to better implement wind energy on the island.
JE is engaging communities by providing wind turbines and filtration pumps that supply energy and water. So far we have worked with two communities.
This webpage was submitted to the U.S. Department of Energy by the team. |
What is the Break Even Point?
The break even point (BEP) is the point when your revenues are equal to your costs. When this point is reached, there are no gains or losses, you break even. Keep in mind, this point is reached during the normal course of business – your company has sold some products/services and your business has incurred some expenses.
Revenues = Total Costs => Break Even Point
Why is the Break Even Point Important?
I know what you’re thinking – boring, boring, boring. Stay with me, I’m getting to the fun part. Your break even point is the magical number of units you need to sell in order to start making a profit. When your business is profitable, that means more money in your pocket!
Let’s say your business sells bicycles. You figure out your break even point is 250 bicycles. If you sell fewer than 250 bikes in the month, your business will operate in the red and you will lose money, because you did not sell enough bikes to cover your costs. However, if you sell more than 250 bikes in the month, your business will make a profit, meaning you have sold enough bikes to cover your costs and have extra money left over! If you sell exactly 250 bikes, you will break even – you will not have a loss or a profit. You will exactly cover your costs for the month.
How is the Break Even Point Calculated?
Whether or not you’re good with numbers (or even like them), it is important to learn how to calculate your break even point. Don’t worry, I’ll try to take it easy on you.
From the formula below, you can see that you need to know 3 things in order to figure out your break even point (in terms of units).
1. Fixed Costs – costs that don’t change based on your level of sales (overhead costs – rent, phone, internet, owner compensation, depreciation of assets, property taxes, etc).
2. Variable Costs – costs that change based on your level of sales (cost of goods sold (COGS) – sales commissions, wholesale cost of the product, costs to produce your product, factory labor, etc).
3. Price – price you sell your product for. This is typically figured out by looking at the wholesale cost plus your markup, or the cost of manufacturing your product plus your markup.
Now the fun part! Let’s play with the numbers and see how this works!
Bicycle Shop has figured out the following amounts:
Fixed Costs = $10,000
Variable Costs = $110/bike
Price = $150/bike
Fixed Costs / (Price – Variable Costs) = Break Even Point (in units)
$10,000 / ($150 – $110) = Break Even Point
$10,000 / $40 = Break Even Point
Break Even Point = 250 Bikes
The bicycle shop needs to sell 250 bikes each month to break even. This will cover their costs, but the bicycle shop will not make a profit.
Fun Fact – the denominator of the equation (Price – Variable Costs) is called the Contribution Margin. The Contribution Margin is the portion of each sale that contributes to Fixed Costs.
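In code, the whole calculation is one line of arithmetic. Here is a minimal sketch using the bicycle shop’s numbers from above (the function name is just illustrative):

```python
def break_even_units(fixed_costs, price, variable_costs):
    # Contribution margin: the slice of each sale left over to cover fixed costs
    contribution_margin = price - variable_costs
    return fixed_costs / contribution_margin

# $10,000 fixed costs, $150 price, $110 variable cost per bike
print(break_even_units(10_000, 150, 110))  # 250.0 bikes
```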
How Does This Apply to My Business?
What if your sales change? The economy goes into a recession, new competition enters the market place, the demand for your product changes, etc. All these changes can cause your sales to drop. If this happens, you won’t be able to sell enough of your product to cover your costs. When you don’t cover your costs, your business operates at a loss.
What if your costs change? Maybe you move to a different office or store front, and as a result you face an increase in rent. Maybe your supplier or wholesaler raises their prices. Maybe business is getting busier, so you decide to hire more employees. All of these changes to your business will increase your costs. A change in costs will affect your break even point. Can your business sell enough of your product or service to cover these new costs and generate a profit?
Let’s look at the bicycle shop example above and take it a step further. We figured out the bicycle shop needs to sell 250 bikes to break even. The problem is, the bike shop owners don’t think they can sell 250 bikes in a month. Now the business owners know they need to make some changes to avoid losing money. What changes can the bicycle shop owners make in order to break even?
1. Price –
Increase the sales price for the bikes. If they are able to increase the price of their bikes, they can get away with selling fewer bikes to cover their costs.
What if they raise their prices to $175/bike? Their new break even point would be:
$10,000 / ($175 – $110) = Break Even Point
$10,000 / $65 = Break Even Point
Break Even Point = 154 Bikes
Now, the bicycle shop would need to sell 154 bikes each month to break even. Problem is, raising prices is not always an option.
2. Fixed Costs –
By reducing their fixed costs, the bicycle shop will have a lower break even point.
What if the bike shop owners negotiate a $1,000 discount on their monthly rent? Their new break even point would be:
($10,000 – $1,000) / ($150 – $110) = Break Even Point
$9,000 / $40 = Break Even Point
Break Even Point = 225 Bikes
Now, the bicycle shop would need to sell 225 bikes each month to break even.
3. Variable Costs –
Reducing their variable costs is another way the bike shop owners can lower their break even point.
What if they found a new supplier that sells wholesale bikes for $10 less than their original supplier? Their new break even point would be:
$10,000 / ($150 – ($110 – $10)) = Break Even Point
$10,000 / ($150 – $100) = Break Even Point
$10,000 / $50 = Break Even Point
Break Even Point = 200 Bikes
Now, the bicycle shop only needs to sell 200 bikes each month to break even.
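Putting the three scenarios side by side in code (the same illustrative helper as above; fractional results are rounded up, since you can’t sell part of a bike):

```python
import math

def break_even_units(fixed_costs, price, variable_costs):
    return fixed_costs / (price - variable_costs)

print(math.ceil(break_even_units(10_000, 175, 110)))  # 1. Raise the price: 154 bikes
print(math.ceil(break_even_units(9_000, 150, 110)))   # 2. Cut the rent: 225 bikes
print(math.ceil(break_even_units(10_000, 150, 100)))  # 3. Cheaper supplier: 200 bikes
```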
As a small business owner, you can see that the decisions you make about pricing your product/service, the costs your business incurs, and the resulting volume that you sell are all interrelated. Knowing how many units you need to sell in order to break even allows you to set meaningful sales goals. Sometimes making the transition from the red to the green is as simple as changing your price or costs. By performing a break even analysis, you can effectively and smartly make changes to your business to help you cover your costs. When you operate a profitable business, the result is more money in your pocket! Cool stuff and very important in running a successful business! |
Have you ever wondered what the biggest machines used on a daily basis in different areas of industry are? It seems that you will be really amazed by the hugeness of some of these creatures.
If we would like to dig a little bit deeper into this topic, we should pay attention to mining machines. Why? Because the mining industry is full of such spectacular items. One of them is Bagger 293 – an immensely powerful machine, found in Germany and used in a coal mine. This monster is ninety-six meters high and two hundred twenty-five meters long, and it requires five people to run it. What is extremely interesting, it is not a stationary machine – Bagger 293 can move, but because of its size and weight its speed is only 0.5 km/h.
A different one, named Overburden Bridge F60, happens to be the world’s largest machine that moves on its own. Here is the main part – it is sixty meters high, five hundred two meters long and 240 meters wide! Can anybody imagine such a giant? It is just mind-blowing, right? And as we noted previously, it can move on its own – though its speed is merely 0.78 km/h. This machine is used as a conveyor in one of the German open-pit mines. And now something for those who like big vehicles.
Do you know the machine called the Terex RH400? It is the world’s biggest hydraulic excavator. Wondering what size it is? Here you go: ten meters high, with a weight of 1,000 tons. Seems a “bit” heavy, right? It can move eighty-five tons of mining output at once.
Those big machines are really spectacular, especially when we see them in action. There are plenty of documentary movies on YouTube where we can see how those monsters work. |
Resource nationalism encompasses a broad range of political and economic actions taken by Governments to regulate the extraction of natural resources within their borders. Policies such as increased tariffs or export restrictions can have far-reaching economic effects on international trade. As the Governments of several developing countries consider enacting nationalistic policies, an examination of the 2014 mineral export ban in Indonesia provides an instructive example of the possible impacts of resource nationalism. Significant changes in the production and trade of unprocessed (that is, ores and concentrates) and processed (that is, refined metal) aluminum, copper, and nickel before and after the export ban form the basis of this study.
The U.S. Geological Survey (USGS) National Minerals Information Center (NMIC) tracks production and trade of mineral commodities between producer and consumer countries. Materials flow studies clarify the effects of an export ban on different mineral commodities by assessing changes in production, processing capacity, and trade. Using extensive data collection and monitoring procedures, the USGS NMIC investigated the effects of resource nationalism on the flow of mineral commodities from Indonesia to the global economy.
Lederer, G.W., 2016, Resource nationalism in Indonesia—Effects of the 2014 mineral export ban: U.S. Geological Survey Fact Sheet 2016-3072, 6 p., http://dx.doi.org/10.3133/fs20163072.
ISSN: 2327-6932 (online)
ISSN: 2327-6916 (print)
Additional publication details
Publication Subtype: USGS Numbered Series
Title: Resource nationalism in Indonesia—Effects of the 2014 mineral export ban
Series title: Fact Sheet
Publisher: U.S. Geological Survey
Publisher location: Reston, VA
Contributing office(s): National Minerals Information Center
Online Only (Y/N): N
Additional Online Files (Y/N): N |
The British Gypsum mining process (Dec 14, 2015): We’re going underground – how Thistle Plaster is made. The process starts by extracting gypsum rock deposits from the ground using a ‘JOY mine cutter’ that scores away at the face of the gypsum rock. This is then broken up and transported to the surface.
Gypsum – Wikipedia: Large open pit quarries are located in many places including Fort Dodge, Iowa, which sits on one of the largest deposits of gypsum in the world, and Plaster City, California, United States, and East Kutai, Kalimantan, Indonesia. Several small mines also exist in places such as Kalannie in Western Australia.
Plaster of Paris – Gypsum, Sustainability Technical Document – USG: Currently USG maintains around 10 natural gypsum operations across North America for extraction of materials for drywall and plaster products. In addition, we have a few operations currently idled and one recently closed and in reclamation. In addition to mined gypsum there is ‘byproduct’ gypsum. This product has several.
Grand Rapids Gypsum Mines – Wyoming, Michigan – Atlas Obscura: Discover Grand Rapids Gypsum Mines in Wyoming, Michigan: These sprawling mines once served as a source of plaster and today serve as a storage facility.
The most important applications of gypsum are in the production of plaster and plasterboard. The mineral forms the basis of a large industry producing a wide range of building products. However, synthetic gypsum is now more widely used in the manufacture of plasterboard. Natural gypsum is especially suitable for the.
Gypsum rock is mined or quarried, crushed and ground into a fine powder. In a process called calcining, the powder is heated to approximately 350 degrees F, driving off three fourths of the chemically combined water. The calcined gypsum, or hemihydrate, becomes the base for gypsum plaster, gypsum board and other.
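Written as a reaction, the calcining step described above (driving off three-fourths of the two waters of crystallization to leave the hemihydrate, better known as plaster of Paris) is approximately:

CaSO4·2H2O + heat (≈350 °F) → CaSO4·½H2O + 1½ H2O (driven off as steam)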
The largest gypsum quarry in the world is located in Nova Scotia and is owned by National Gypsum. National Gypsum mines and quarries gypsum rock, crushes and grinds it, and calcines it to remove chemically bound water. It adds starch and other additives and water to form a stucco slurry. The stucco is sandwiched.
Gypsum Quarry (Altsi). The gypsum quarry is located at Altsi, Sitia Municipality, Crete and has been owned by LAVA SA since 1980. . The quarry's production capacity is 300,000 tons/year. Loading . tons/hour. The process of loading vessels with bulk gypsum complies with the requirements of the ISO 9001:2008 standard.
Massive gypsum rock forms within layers of sedimentary rock, typically found in thick beds or layers. . It is processed and used as prefabricated wallboard or as industrial or building plaster, used in cement manufacture, agriculture and other uses. . Most of the world's gypsum is produced by surface-mining operations.
Gypsum mining. The gypsum mines are one of the most central and important elements to the history of Paris. Gypsum is the main ingredient required in making plaster of Paris, and it was because of the town's large gypsum beds that Capron decided to name the town after it (Warner, 463). There are major gypsum deposits.
Gypsum/Anhydrite are produced from open-cast mines or underground mines using pillar and stall mining methods that give extraction rates of up to 75%. . When Gypsum (CaSO4,2H2O) is ground to a powder and heated at 150° to 165° C, three-quarters of its combined water is removed producing hemi-hydrate plaster.
Oct 10, 2013 . Gypsum is a white crystal-like mineral used to make certain paints, plaster of Paris and even the chalk we sometimes, if rarely, still use in the classroom. . Dynamite was often used to open new tunnels, and walls were simply left standing to support the mine without other reinforcement added. The city has.
In 1903 a deposit of gypsum was discovered on the site of the present Fort Dodge Plant. This was not startling because people at that time knew the Fort Dodge gypsum bed was one of the most extensive in the country. What was unusual was the Plymouth mill, the biggest plaster mill in Iowa was built there. Mining.
Feb 15, 2013 . ABSTRACT: This paper describes recent research to evaluate Gypsum plaster seal designs in the full-scale pressure test facility at Londonderry, NSW. After the Moura Number 2 Mine explosion a review of the safety of coal mine operations resulted in changes to mining legislation where ventilation control.
(1) The principal uses of gypsum are in the manufacture of surgical plasters, fertilizers, pottery, cement, chemicals, and as an extender in paints.(3) There were 46 reported mines in 2003–04. The average daily labor employed in gypsum mines is 396.(4) The present study was carried out in the gypsum mines of Rajasthan to.
Jun 1, 2017 . So as DSG was increasingly used in manufacturing, the need to mine and quarry gypsum rock tailed off. Despite this, British Gypsum was far from complacent, investing in ways to improve products and operational sustainability. Investment in increasing the 'recyclability' of plasterboard brought potential.
The mine was established in 1876 by the Sub-Wealden Gypsum Co. but is now owned by British Gypsum Ltd. The bulk of mined gypsum was dispatched by rail for use in the cement industry with the remainder processed on site for the manufacture of plaster. In 1973 BGL established a plasterboard factory alongside the.
The gypsum mined in Albert County was considered to be some of the finest quality gypsum in the world. The Albert Manufacturing Company operated from 1854 until 1980. During this time the company operated a gypsum mine, four quarries, a private railway, and a plaster mill. When opened in 1854 the plaster mill was.
Gypsum global production in 2013 was 160 Mtonne. In 2013 gypsum production in South Australia was estimated at 4.4 Mtonne, which was 80% of the total Australian production. Gypsum is naturally occurring hydrated calcium sulfate (CaSO4·2H2O). Its main use is in the manufacture of plaster products including wall and.
Gyproc have been manufacturing in Ireland since 1936, with gypsum rock mined from a deposit at Knocknacran, Co. Monaghan. From this raw material, an extensive range of plasterboards and plasters are manufactured at our production facility at Kingscourt, Co. Cavan. Plaster Mill. In the Plaster Mill at Kingscourt, the rock. |
Q: How can I prevent Harassment in the Workplace?
Harassment in the workplace includes:
- Unwanted conduct related to sex or conduct of a sexual nature that has the purpose or effect of violating a person's dignity or creating an intimidating, hostile, degrading, humiliating or offensive environment for them; and
- Less favourable treatment because the employee has rejected or submitted to such conduct.
- Conduct can be any unwanted verbal, non-verbal or physical conduct of a sexual nature and can include unwelcome sexual advances, touching, forms of sexual assault, sexual jokes, displaying pornographic photographs or drawings or sending emails with material of a sexual nature.
In order to prevent Harassment in your workplace, employers should:
- Carry out Equality and Diversity training and Bullying and Harassment training – outline what counts as unacceptable behaviour at work, and make this training available to managers as well
- Set down clear policies on Equal Opportunities and Bullying and Harassment – ensure these are easily accessible and communicated to all employees
- Investigate fully all complaints brought to your attention
- Take formal disciplinary action where appropriate |
Maryland Study Demonstrates Mid-Atlantic Offshore Wind Capacity
Offshore wind farms could generate more than enough energy to meet Maryland’s annual electricity consumption, according to a just-published study by researchers at the University of Delaware. The potential power output is nearly double current energy demands for the state, even when taking into account various limitations on where to place equipment in the Atlantic.
“Installing wind turbines far off the coast of Maryland would help the state generate large quantities of electricity while creating local jobs,” said study co-author Willett Kempton, professor of marine policy in UD’s College of Earth, Ocean, and Environment (CEOE). “Producing more electricity this way also displaces fossil fuel generation, thus reducing harmful carbon dioxide emissions and improving air quality.”
Existing Maryland law requires 18 percent of electricity to come from renewable energy sources by 2022. The law was passed before the potential supply of offshore wind was documented; no one even knew whether the offshore wind resource was of significant size.
Offshore wind could be important to meeting Maryland’s requirement because it is more abundant and more steady than land-based Maryland wind, and is less expensive than solar power.
“If the offshore resource remains unused, meeting the state’s renewable energy requirement will be more costly to Maryland, as is true for the other mid-Atlantic coastal states,” Kempton noted.
The study found that a maximum of 7,800 wind turbines could provide an annual average output of 14,000 megawatts, equivalent to 189 percent of Maryland’s electric load. The calculation includes the use of new technology for deep-water turbines, but even using only commercially proven, shallow-water equipment, the energy generated would total 70 percent of the state’s annual demand. This is the maximum resource possible, but actual development of offshore wind would start with power plant-sized units of 80 to 150 turbines.
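As a back-of-the-envelope check on these figures: only the 14,000-megawatt output, the 189 percent share, and the 70 percent shallow-water share come from the study as reported; the derived numbers simply follow from them.

```python
avg_output_mw = 14_000   # annual average output of the full 7,800-turbine build-out
share_of_load = 1.89     # 189% of Maryland's electric load

implied_load_mw = avg_output_mw / share_of_load
print(f"Implied average Maryland load: ~{implied_load_mw:,.0f} MW")  # ~7,407 MW

shallow_share = 0.70     # commercially proven, shallow-water equipment only
print(f"Shallow-water portion: ~{shallow_share * implied_load_mw:,.0f} MW")  # ~5,185 MW
```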
In determining areas of the ocean suitable for offshore wind farm development, the researchers excluded zones of possible conflict. The entire Chesapeake Bay was excluded, as were fish havens, bird migration areas, and shipping routes.
The study also considered how visible the turbines would be from shore, placing the turbines eight nautical miles away so that visual impact would be minimal.
As in the rest of the mid-Atlantic region, large shallow areas and strong winds off Maryland’s coast make it suitable for currently available offshore windmill technology.
The study found that average power output would be highest in the winter and lowest in the summer. Extra power generated during the winter months could service neighboring states, while Maryland would need to rely on other sources during a comparative shortage in the summer. Developers could position windmills to capitalize on seasonal wind direction, such as to the southwest for summer winds.
The findings were recently published in the Elsevier journal Renewable Energy. |
By Kathy Lambert
Many people are speaking out about inappropriate physical contact and remarks in the workplace. They are to be commended for their courage. It is also time to speak about more subtle forms of discrimination faced in the workplace too.
Here are several examples of more “subtle or not so subtle” situations.
A woman shares an idea and then a few minutes later a man in the room has the same or a very similar idea and his is brilliant.
Another example is in introductions. For instance, this is Mr. Jones, our regional manager, area 1 and this is Amy. Mr. Jones has his title presented but there is no mention that Amy is Amy Smith and is regional manager of area 2.
Mr. Thompson says off the top of his head that he thinks we should buy product A. Ms. Anderson says she thinks we should buy product B and presents current data to back up her idea. But the decision, inexplicably, is to buy A and study B. In many cases, time permitting, both products should be studied.
At Super Bowl time there was a group conversation about the game. All of the people in the room had seen the game. But each time a woman said anything about a play, it was ignored. This exclusion or not being invited to a golf game or other activity reduces opportunities for networking.
Sometimes people know when they have been out of line, and other times people say they did not realize how their behavior was being perceived. For example, I once went to a meeting appropriately dressed in a business suit, but the man I was negotiating with looked at my legs the entire meeting. The meeting needed to be completed the next day, so that day I wore a long skirt to the ground. When I walked in, he laughed and said, “So you noticed that I was looking at your legs yesterday?”
Yes, I noticed, and decided that we would get more done if that was not happening in the future. He knew, but had gotten away with this behavior for so long that he felt comfortable doing so. Regardless of whether a person knows or says they did not know, it is important to say clearly that the behavior is not appropriate or makes you uncomfortable. Usually there is an apology, or the person says they did not realize how it was being perceived. Either way, that clarity should stop the behavior.
When I tell my granddaughters stories of discrimination and experiences years ago, they are aghast. They ask, “How could that happen?”
I remind them that many women in traditionally male occupations have paved the way for them. Many of the past forms of treatment are today prohibited by law, but others are more subtle and continue. It is time to address both types of inappropriate treatment, and to have the words and courage to do so as it happens.
As I have talked over these issues and examples with other women, they have shared their own subtle stories, and I think this is part of why there are not as many women in upper management. Their ideas and qualifications have been downplayed or dismissed, or they have been excluded from networking opportunities. Speaking up about the subtle forms of discrimination helps to make a healthier workplace for us all and to appreciate each person’s contributions and talents.
Kathy Lambert is a King County Council member, representing District 3. |
As you set up your new business, one of the decisions you’ll need to make is whether to use the cash or the accrual method of accounting when creating your financial statements and filing your taxes.
Today, we’ll explain the difference between the two, the pros and cons of each, and how to decide which accounting method is best for your small business.
If you would like to jump ahead to a certain section, you can do that below.
What is cash-based accounting?
What is accrual-based accounting?
Example of how a month of income might look using cash vs. accrual-based accounting
How does each method affect taxes?
How do you choose which method is best for you?
Businesses that use a cash-based accounting system recognize revenue when it hits the bank, rather than when an invoice is sent. This accounting method is popular with small businesses because it’s easy to maintain and fairly straightforward – either you have money in the bank or you don’t.
Operating on a cash basis means that existing balances in accounts receivable or accounts payable will not be included in your revenue amount, until those balances are paid.
Using this method means that since you operate your business based on the money you actually received, you’ll also pay taxes on the amount you received, rather than the amount you invoiced.
Businesses that use an accrual-based accounting system recognize revenue when it is earned, whether or not that money has already been received. Because this method gives businesses a more realistic picture of their income and expenses for a given time period, accrual-based accounting is more commonly used than cash-based accounting.
However, accrual-based accounting doesn’t take cash flow into consideration, meaning a business can appear profitable while having no money in the bank. If you choose to use this method, be very careful about monitoring your account balance or risk frequent overdraft fees.
May 2017 Transactions:
May’s profit using cash-based accounting: $2,300 ($2,500 in income minus $200 from bills)
May’s profit using accrual-based accounting: $3,300 ($4,000 in income minus $700 in referral fees)
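As a sketch of how the two methods classify the same month: the individual transactions below are hypothetical, chosen only so the totals match the figures above.

```python
# Each transaction records when it was earned/incurred and when cash actually moved.
transactions = [
    {"amount":  2500, "earned": "2017-05", "paid": "2017-05"},  # invoice paid in May
    {"amount":  1500, "earned": "2017-05", "paid": "2017-06"},  # invoice paid in June
    {"amount":  -200, "earned": "2017-05", "paid": "2017-05"},  # bill paid in May
    {"amount":  -500, "earned": "2017-05", "paid": "2017-06"},  # referral fee paid later
]

def profit(txns, month, basis):
    # Cash basis counts a transaction when money moves; accrual when it is earned/incurred.
    key = "paid" if basis == "cash" else "earned"
    return sum(t["amount"] for t in txns if t[key] == month)

print(profit(transactions, "2017-05", "cash"))     # 2300
print(profit(transactions, "2017-05", "accrual"))  # 3300
```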
One of the biggest differences between the two accounting methods is how you report your income to the IRS at tax time. If the above example had taken place in December 2016, the accounting method you use will affect how you report that income on your tax return.
For example, using the accrual method, you would report that $4,000 invoice on your 2016 tax return (and pay taxes on it), even if you didn’t actually receive the money until January.
However, if you use the cash accounting method, you wouldn’t report that $4,000 income until you actually received it, meaning if you received it in January, you would wait to pay taxes on it until the following tax season.
Another way your chosen method of accounting affects taxes is in how you claim deductions for a given tax year. For example, if you operate on a cash basis and incurred a business expense in December 2016 but didn’t actually pay it until 2017, you wouldn’t be able to claim it as a business expense on your 2016 tax return. However, if you operate on an accrual basis, you would be able to claim it as a business expense for 2016 because you record transactions as they occur, not when the money actually moves in and out of your accounts.
You’ll choose whether to operate on a cash or accrual basis the first time you file taxes for your business. As a small business owner, you can choose to use either type of accounting method, as long as your sales total less than $5 million each year. However, businesses that make more than $5 million a year or keep an inventory of merchandise to sell to consumers are required by law to use the accrual method, so check with your accountant if you need help deciding which method is best for you.
Remember that the cash method helps you keep better track of how much money you actually have at any given moment, and the accrual method gives you more of an accurate view of your business transactions. Some businesses even choose to use a hybrid method where they use accrual accounting for inventory and cash accounting for income and expenses.
Once you choose an accounting method, you’ll need to continue filing your taxes using that method so you don’t run the risk of an audit. If you ever need to change accounting methods, file Form 3115 to ask for approval from the IRS.
ZipBooks offers affordable bookkeeping services for small business owners just like you. Learn more about our bookkeeping services today!
Brad Hanks is in charge of Growth at ZipBooks. |
Professor of English, East Carolina University
with Carolyn R. Miller and Stephen Carradini
North Carolina State University
Posted September 2016
The field of technical communication is concerned with how professionals communicate complex information with specialist and nonspecialist users in order to solve practical problems, often using communication technologies, multi-modal documents, or complex documents. These professionals may be identified primarily by their roles as communicators, with varying job titles and responsibilities but with primary expertise in communication (see Brumberger and Lauer 2015); or by their roles in another professional area, such as engineering, computing, management, accounting, criminal justice, and healthcare (“workplace communication”). Although the roles and professional identities are rather distinct, the communication tasks and products with which they work are often shared. This overview will give some attention to both domains of technical communication.
Like all communicators, technical communicators make use of shared textual conventions applied in recognizably similar situations to accomplish their communication goals. Those shared conventions, or genres, simplify the technical communicator’s work by constraining the range of possibilities in a given communication situation, and they can encourage innovation by helping technical communicators understand the goals of a text and envision a range of ways to achieve those goals.
Technical communicators face special challenges:
- The subject matter of technical communication is specialized, difficult, or esoteric.
- Users are motivated to accomplish tasks and solve problems, not simply to acquire information.
- The same information may be consumed by users with a wide variety of unpredictable characteristics and needs.
- Many technical documents are created collaboratively or are assembled from preexisting documents.
- Technical communicators are often not experts in the subjects that they must write about.
- The text’s creators and users often do not share the same expertise.
- Technical communication often depends upon complex information technologies for construction and dissemination.
- Much technical communication is high stakes, involving matters of risk or safety.
- Many technical messages have financial, legal, and ethical ramifications.
- Technical communication often occurs in complex organizational or institutional contexts.
Genre has become a mission-critical concept for the field of technical communication because it offers resources for addressing each of these challenges.
Technical communication in fact made among the earliest substantial contributions to North American rhetorical genre research, probably because of its insistent focus on identifiable rhetorical contexts beyond the classroom, with especially active research in Canada by those concerned with workplace communication (Schryer 2002). In their introduction to genre studies, Bawarshi and Reiff include an extensive review of genre research in workplace and professional contexts (2010).
A search of the four major journals in the field shows both early and sustained attention to genre, displayed in the chart below:
Interestingly, these data diverge from the finding in Dayton and Bernhardt’s (2004) survey of members of the Association for Teachers of Technical Writing that genres and genre theory were among the topics of below average interest for future issues of the association’s journal, Technical Communication Quarterly.
Formal and Rhetorical Perspectives on Genre
Most practitioners of technical communication understand “genres” as the set of document types commonly produced in their workplace: memos and letters, grant proposals, procedure manuals, instructions, progress reports, annual reports, and so forth. In the workplace, knowledge of the most important or frequently used genres enables employees to operate efficiently and reliably. Knowledge of the formal features of common document types and recognition of the moments when they are called for helps accomplish a great deal of practical work. For this reason, textbooks and handbooks provide models and guidelines for producing many common genres, and software developers have created templates that guide the production of many common technical genres (e.g. Gurak and Hocks 2009). Bibliographic resources are often organized in part around genres adopted largely from the textbook tradition, such as those listed above (e.g. Belanger 2005, Moran and Journet 1985, Sides 1989).
While recognizing the practical value of this notion of genres as comparatively stable forms, most scholars of technical communication have followed rhetorical theorists, beginning with Miller (1984), who understand genre as a dynamic and socially rooted concept: the appreciation of exploitable regularities in practical communication situations. Technical communication scholars and rhetorical scholars agree that genre is based in communities or other social groups; has some relationship to activities, or getting work done in the world (not just a taxonomic scheme for texts); and enables innovation and creativity.
Because it is a comparatively young field of study, technical communication has drawn opportunistically upon fundamental theoretical perspectives from rhetorical theory (in the form of rhetorical genre studies and Bakhtinian genre theory), activity theory, sociology, and discourse analysis and linguistics. Although its approach to genre shares much with these disciplines, the characteristic challenges of technical communication noted above have led scholars to revise rhetorical and other approaches to genre to address its unique objects and problems.
Genre, Activity Theory, and Socialization
One of the challenges of theorizing genre in technical communication is that the “texts” produced in these contexts are so mutable. Since much workplace writing occurs in rapidly changing contexts (and often in response to technological change), the field must account for how genres form and change over time. Indeed, Schryer’s (1993, 200) insight that genres are only “stabilized-for-now,” always subject to the pressures of changing situations, derived from her study of a change in the formal conventions of veterinary medical record-keeping. Russell (1997) reviews several studies that look at the coevolution of genres and professional contexts in a variety of technical disciplines, ranging from the evolution of psychiatry’s Diagnostic and Statistical Manual (McCarthy and Gerring 1994, McCarthy 1991) to the genre of the request for proposals (RFP) and related genres in defense contracting (Van Nostrand 1994). Russell (1997, 226) observes that this type of research arose as technical communication turned its attention toward the social dynamics of workplaces (and their productions, including genres) and away from formalist perspectives on genre.
The notion of text form as simply a vehicle for a text’s message has been thoroughly overturned in technical communication, where the formal characteristics of a text often have great bearing upon the text’s meaning, effects, and uptake. For example, since many users come to technical documents with little prior experience or knowledge, one of the technical writer’s tasks is to build in features that “teach” users how to use the document even as they consume it. Navigational tools, interface features, characteristics of the document’s physical form, and other elements all support users’ uptake of a novel document. Technical documents differ in material, typography, design, information structure, platform, mode, and medium. The discipline’s understanding of genre must accommodate this great heterogeneity while still helping scholars to understand regularities of document production and helping practitioners to do their work effectively.
Many technical communication theorists favor the highly situated view of genre that activity theory offers. As Russell (1997) observes, when analyzing the genre of reports, for instance, “one looks closely at one specific activity system, and those with which it interacts, to find regularities in the ways people in that activity system write reports, and the history of their language use” (226). Though genres are understood in terms of regularities, these are mutable, situated regularities, and their idiosyncrasies are just as important as their relative stability. The degree of regularity of genred activity varies, and our understanding of discursive practices must take into account not only technical communicators’ uses of the most stable genres in their repertoires, but also their uses of “those discursive objects that are more emergent, of unknown shape and dynamic content, drawn from a variety of sources” such as the “micro-discursive operations” that occur outside of identifiable genres but that coordinate work across time and space and between people (Swarts 2008, 303, 304), for example, reading a data display, responding to a query, or, from Spinuzzi (2003), Post-it notes used to transfer data from one workspace to another. In a similar vein, Kain (2005) shows how genre functions in a situation that is nonroutine and nonrecurring through its instrumental, metacommunicative, and sociopolitical functions.
A related area of attention has been how socialization into a discipline, profession, or workplace influences writers’ uptake of discursive conventions and other types of performance, and, conversely, how exposure to these conventions and performances contributes to disciplinary socialization or enculturation. Though these problems are not unique to technical communication, they are special problems in this area because technical communicators are expected to take up new roles and genres very quickly (and in many cases transition quickly from role to role) in the contemporary technical workplace. As Artemeva notes in emphasizing the relevance of activity theory and situated learning theory to genre socialization, “the development of a professional identity is inextricably linked to participating in the workplace genres and ‘learning one’s professional location in the power relations of institutional life’” (2008, 61, quoting Paré 2002). See also Russell (2007), Winsor (1999), and Zachry (2000).
Research on Specific Technical Communication Genres
Numerous studies have examined the genres unique to particular work environments, most typically treating them as evolved forms, tools, or mediational resources that both reflect and shape their contexts. In fact, along with studies of how learners take up common technical genres (in school or in the workplace), studies of niche genres are probably the most common examples of scholarship on genre and technical communication. The data from the four major journals mentioned above reveal that the most frequently discussed genres have been reports, both technical and scientific, proposals, academic articles, and computer documentation, while there is a plethora of genres discussed only once or twice, including electronic mail (Zucchermaglio and Talamo 2003), medical case presentations (Spafford et al. 2006, Schryer and Spoel 2005), web resumes (Killoran 2006), design critiques (Dannels 2011, Dannels and Martin 2008), presentence investigation reports (Converse 2012), corporate promotional videos (Yli-Jokipii 1998), patent drawings (Donnell 2005), call-center communication (Xu et al. 2010), paper sewing patterns (Durack 2003), clinical protocols (Bell, Walch, and Katz 2000), and gameplay genres (Sherlock 2009). Many studies use a case approach, scrutinizing a small number of examples of a noteworthy genre, although Graham et al. (2015) advocate statistical methods for analyzing large corpora.
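Graham et al.’s corpus-based direction can be illustrated with a minimal sketch. The Python fragment below is a hypothetical illustration only: the article titles and the genre keyword list are invented, and this is not the authors’ actual statistical method. It simply tallies how often candidate genre terms appear in a small corpus of titles, the kind of first counting step on which a statistical genre analysis would build.

```python
from collections import Counter

# Invented corpus of article titles, standing in for journal metadata.
titles = [
    "Reading the Annual Report as Social Action",
    "Proposals That Persuade: A Case Study",
    "Usability and the Online Help Forum",
    "From Memo to Email: Genre Change in the Office",
]

# Invented list of candidate genre terms to tally.
genre_terms = ["report", "proposal", "memo", "email", "manual", "resume"]

counts = Counter()
for title in titles:
    lowered = title.lower()
    for term in genre_terms:
        counts[term] += lowered.count(term)

# Print the most frequently mentioned genres first, omitting zero counts.
for term, n in counts.most_common():
    if n:
        print(f"{term}: {n}")
```

A study of the kind Graham et al. advocate would of course operate on full texts and apply statistical tests rather than raw counts; the sketch shows only the counting step.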
The goals of these studies differ. Many use the insights of genre theory to uncover notable features of the genres (e.g. Converse 2012, Xu et al. 2010, Sherlock 2009); others use the case to advance theory (Dannels 2011, Swarts 2006). As this list also reveals, technical communication scholars are interested in both high-visibility and formally structured genres (reports, resumes) and more esoteric or ephemeral genres (critiques, emails), as well as objects that in some frameworks might not be considered genres (gameplay); they are interested not only in written but also in oral (medical case presentations, call-center communication), visual (patent drawings, sewing patterns), and electronic genres (web resumes, promotional videos).
Genre in Larger Networks and Technological Systems
In most technical and workplace settings, genres operate not in isolation but in combination with other genres, and in rhetorical situations that are themselves complex and sometimes ill defined. To account for these features, scholars have proposed several generative models: genre systems (Bazerman 1994, Yates and Orlikowski 2002), genre sets (Devitt 1991), genre repertoires (Orlikowski and Yates 1994), and genre ecologies (Spinuzzi 2004, Spinuzzi and Zachry 2000). The genre ecologies perspective, an outgrowth of activity theory, has been useful for describing the complex dynamics of networked, globalized workplaces and of digital genres. In Spinuzzi’s (2004) formulation, a genre ecology in any workplace or activity network is a set of relations of mediation between workers, genres, and activities; these relations are contingent and decentralized, and they achieve relative, if temporary, stability. Agency and cognition are distributed across the ecology, across workers, genres, and other tools. The activity is fundamentally shaped by the genres that enact it, so that it is impossible to separate the social activity (say, fulfilling purchase orders) from the genre (the purchase order) that one uses to do that activity, including the “purchase order” form that structures the act.
Because rhetorical situations in technical communication often span distances in time and place, “repeatability” is a key concern: technical communicators must coordinate with previous work and with material produced in other locations. To create consistency across time and space, genre standardization is achieved with style sheets, templates, and, increasingly, more transformative genre-replicating technologies such as single-sourcing and content management systems. These systems help organizations (and their members) coordinate their communications, which in turn makes communication decisions repeatable and genres more stable.
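The single-sourcing logic can be sketched in a few lines. The Python fragment below is a hypothetical illustration: the chunk format and the rendering functions are invented for the example and do not represent the API of any actual content management system. One content chunk is authored once and rendered into two different output genres.

```python
# A reusable content chunk, authored once (invented format).
chunk = {
    "id": "warn-power",
    "text": "Disconnect the power supply before opening the case.",
}

def render_quick_start(c):
    # Terse imperative style for a quick-start card.
    return f"! {c['text']}"

def render_service_manual(c):
    # Formal warning block for a service manual.
    return f"WARNING ({c['id']}): {c['text']}"

# The same stored text surfaces in two output genres.
print(render_quick_start(chunk))
print(render_service_manual(chunk))
```

Note that the chunk itself carries no record of the rhetorical contexts it travels between; only the rendering layer differs, which anticipates the concern raised in the next paragraph.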
Recent scholarship in technical communication has focused on these genre-standardization approaches, which have problematized notions of authorship, rhetoric, form and content, information, and genre itself. As Swarts (2010) notes, although single-sourcing systems purportedly “cast reusable content as contextless and rhetorically neutral,” they “mask the complexity of the rhetorical relationships negotiated by reused text” (158). This suggests the need for systems and techniques that better support users’ actual strategies for reuse, as well as further study of users’ adaptation of these resources in practice. Clark (2007) anticipates a great change in the discipline’s understanding of genre as content management approaches take hold: genres may increasingly be conceptualized as “outputs” defined by their fixed, technologically applied characteristics, and writers may find themselves producing material divorced from contexts of use, or, conversely, designing outputs suitable for a variety of possible content.
Other recent research also reflects the powerful presence of technological networks in technical communication. Swarts’s (2015) study of the genre of help documentation notes that because networks produce uncertain and ill-defined problems, task-based documentation can no longer adequately anticipate users’ needs and tasks, and users are turning to online help forums for solutions. Such forums offer a “theater of help(ing)” (165) that technical communicators must learn to manage and facilitate. Likewise, Selber (2010) focuses on the online instruction set, suggesting that this genre of technical communication “can be seen as central to an age of social media” because of an “online participatory culture that encourages involvement, collaboration, and information exchange” (99). And more broadly still, Lewis (2016) argues that “invisible rhetorical genres operating at macroscopic levels of scale are central to shaping individual and communal activity in digital spaces” (5). His study of the content management system and bittorrent tracker used by an online community shows how designers “relied on their communal membership to design effective navigation tools, interfaces, and information architectures” that “map[ped] the social motives and communal desires” of the user community, thus embedding generic social actions within the technology through a form of user-experience design that is often used in more formal contexts (22). See also Carliner and Boswood (2004).
Teaching and Learning Technical Communication Genres
Much of the pedagogical research concerns workplace communication, focused on what are usually known as “service” courses for advanced and graduate students in technical, scientific, and professional curricula, not for those who anticipate becoming technical communication professionals. In either case, the point of teaching students to employ genres in an academic setting is so that they will be better equipped to employ genres in the workplace and other professional contexts. Cook’s (2003) survey of technical communication courses inventories genres assigned in 197 courses; the five most frequently assigned were the oral presentation, proposal, memo, correspondence, and progress report, with other types of reports nearly as frequent.
The early technical writing pedagogies described by Connors (1982), Kynell-Hunt (2000), and others treated technical contexts and forms more directly than did most of the coeval composition pedagogies, encouraging reproduction of technical and workplace genres as relatively fixed textual forms. In her discussion of the formal-textual and social context/discourse community perspectives in genre research, Luzón (2005) notes that each perspective has given rise to a distinct pedagogical approach in technical communication. The textual perspective involves explicit teaching of linguistic features, whereas the social perspective places students within a social context to address a rhetorical situation. According to Luzón, both paradigms are active in technical communication pedagogy, and she cites examples of both. Even within the social approach, pedagogical expedience often leads to formal-textual pedagogy, which tends to prioritize the most stable genres while deemphasizing the informal local rhetorical practices and social negotiations that add to a professional community’s genre work. Henze notes the particular relevance of the social perspective and the genre approach for teaching technical communication and provides a review of several models for teaching genre, as well as specific classroom activities and assignments (Forthcoming).
In technical communication, the problem of transfer, long an issue in Composition Studies, points to the transition from the academy to the workplace, and much pedagogical research focuses here, highlighting the differences between what can be learned in a classroom and what is needed on the job; foundational studies here include Beaufort (1999), Dias et al. (1999), Lingard and Haber (2002), and Smart and Brown (2008). In an analysis of four engineering students’ transition from the classroom through the engineering curriculum and into their careers, Artemeva (2009) demonstrates that “learning professional genres does not occur in a smooth, uninterrupted way” (171). She argues that “some ingredients of genre knowledge can be taught in a classroom context,” but “for the individuals to be able to apply this knowledge successfully, it needs to be complemented by other genre knowledge ingredients accumulated elsewhere” (173), such as “cultural capital,” “domain content expertise,” “private intention,” and “workplace experiences” (172).
Perhaps for this reason, much recent pedagogical work has focused on nontraditional learning environments and contexts such as internships (Bourelle 2014), situated learning (Artemeva, Logie, and St-Martin 1999, Blakeslee 2001), simulations (Freedman, Adam, and Smart 1994), and client-based pedagogy (Wojahn et al. 2001). This area of pedagogical scholarship offers strategies for fostering students’ uptake of workplace genres in authentic or quasi-authentic contexts and for designing curricula that support students as they enter workplaces or interact with clients beyond the classroom. In these approaches, the contexts, ecologies, and communities take center stage, and workplace genres are addressed as artifacts of these communities or components of the workplace activity system.
Note that the customary term “user” rather than “reader” or “audience” is indicative of the instrumental focus of technical communication.
Data gathered by Stephen Carradini; includes 246 articles that use genre theory or research in some substantive way, not every article that mentions the word “genre.”
Artemeva, Natalia. 2008. "Approaches to Learning Genres: A Bibliographical Essay." In Rhetorical Genre Studies and Beyond, edited by Natasha Artemeva and Aviva Freedman, 9–99. Winnipeg, Manitoba: Inkshed.
Artemeva, Natalia. 2009. "Stories of Becoming: A Study of Novice Engineers Learning Genres of Their Profession." In Genre in a Changing World, edited by Charles Bazerman, Adair Bonini and Débora Figueiredo, 158–178. Fort Collins, CO: WAC Clearinghouse and Parlor Press.
Artemeva, Natasha, Susan Logie, and Jennie St-Martin. 1999. "From Page to Stage: How Theories of Genre and Situated Learning Help Introduce Engineering Students to Discipline-Specific Communication." Technical Communication Quarterly 8 (3): 301-316. doi: 10.1080/10572259909364670.
Bawarshi, Anis S., and Mary Jo Reiff. 2010. "Genre Research in Workplace and Professional Contexts." In Genre: An Introduction to History, Theory, Research, and Pedagogy, edited by Charles Bazerman, 132–150. West Lafayette, IN: Parlor Press and WAC Clearinghouse.
Bazerman, Charles. 1994. "Systems of Genres and the Enactment of Social Intentions." In Genre and the New Rhetoric, edited by Aviva Freedman and Peter Medway, 79–101. London: Taylor and Francis.
Beaufort, Anne. 1999. Writing in the Real World: Making the Transition from School to Work. New York: Teachers College Press.
Belanger, Sandra E. 2005. Business and Technical Communication: An Annotated Guide to Sources, Skills, and Samples. Westport, CT: Praeger.
Bell, Heather D., Kathleen A. Walch, and Steven B. Katz. 2000. "Aristotle's Pharmacy: The Medical Rhetoric of a Clinical Protocol in the Drug Development Process." Technical Communication Quarterly 9 (3): 249-269. doi: 10.1080/10572250009364699.
Blakeslee, Ann M. 2001. "Bridging the Workplace and the Academy: Teaching Professional Genres through Classroom-Workplace Collaborations." Technical Communication Quarterly 10 (2): 169-192. doi: 10.1207/s15427625tcq1002_4.
Bourelle, Tiffany. 2014. "New Perspectives on the Technical Communication Internship: Professionalism in the Workplace." Journal of Technical Writing and Communication 44 (2): 171–189. doi: 10.2190/TW.44.2.d.
Brumberger, Eva, and Claire Lauer. 2015. "The Evolution of Technical Communication: An Analysis of Industry Job Postings." Technical Communication 62 (4): 224-243.
Carliner, Saul, and Timothy Boswood. 2004. "Genre: A Useful Construct for Researching Online Communication for the Workplace." Information Design Journal & Document Design 12 (2): 124–136.
Clark, Dave. 2007. "Content Management and the Separation of Presentation and Content." Technical Communication Quarterly 17 (1): 35–60. doi: 10.1080/10572250701588624.
Connors, Robert J. 1982. "The Rise of Technical Writing Instruction in America." Journal of Technical Writing and Communication 12 (4): 329–352.
Converse, Caren Wakerman. 2012. "Unpoetic Justice: Ideology and the Individual in the Genre of the Presentence Investigation." Journal of Business and Technical Communication 26 (4): 442-478.
Cook, Kelli Cargile. 2003. "How Much Is Enough? The Assessment of Student Work in Technical Communication Courses." Technical Communication Quarterly 12 (1): 47-65. doi: 10.1207/s15427625tcq1201_4.
Dannels, Deanna P. 2011. "Relational Genre Knowledge and the Online Design Critique: Relational Authenticity in Preprofessional Genre Learning." Journal of Business and Technical Communication 25 (1): 3-35.
Dannels, Deanna P., and Kelly Norris Martin. 2008. "Critiquing Critiques: A Genre Analysis of Feedback across Novice to Expert Design Studios." Journal of Business and Technical Communication 22 (2): 135-159.
Dayton, David, and Stephen A. Bernhardt. 2004. "Results of a Survey of ATTW Members, 2003." Technical Communication Quarterly 13 (1): 13-43. doi: 10.1207/S15427625TCQ1301_5.
Devitt, Amy J. 1991. "Intertextuality in Tax Accounting: Generic, Referential, and Functional." In Textual Dynamics of the Professions: Historical and Contemporary Studies of Writing in Professional Communities, edited by Charles Bazerman and James Paradis, 336–355. Madison, WI: University of Wisconsin Press.
Dias, Patrick, Aviva Freedman, Peter Medway, and Anthony Paré. 1999. Worlds Apart: Acting and Writing in Academic and Workplace Contexts. Rhetoric, Knowledge, and Society. Mahwah, NJ: Routledge.
Donnell, Jeffrey. 2005. "Illustration and Language in Technical Communication." Journal of Technical Writing and Communication 35 (3): 239-271. doi: 10.2190/HY3L-WN98-QC5R-P3B3.
Durack, Katherine T. 2003. "Observations on Entrepreneurship, Instructional Texts, and Personal Interaction." Journal of Technical Writing and Communication 33 (2): 87-109. doi: 10.2190/Y5VH-HAD2-PYT1-TR1N.
Freedman, Aviva, Christine Adam, and Graham Smart. 1994. "Wearing Suits to Class: Simulating Genres and Simulations as Genre." Written Communication 11 (2): 193–226.
Graham, S. Scott, Sang-Yeon Kim, Danielle M. DeVasto, and William Keith. 2015. "Statistical Genre Analysis: Toward Big Data Methodologies in Technical Communication." Technical Communication Quarterly 24 (1): 70–104. doi: 10.1080/10572252.2015.975955.
Gurak, Laura J., and Mary E. Hocks. 2009. The Technical Communication Handbook. New York: Pearson Longman.
Henze, Brent. Forthcoming. "Teaching Genre in Professional and Technical Communication." In Teaching Professional and Technical Communication, edited by Tracy Bridgeford. Logan, UT: Utah State University Press.
Kain, Donna J. 2005. "Constructing Genre: A Threefold Typology." Technical Communication Quarterly 14 (4): 375-409. doi: 10.1207/s15427625tcq1404_2.
Killoran, John B. 2006. "Self-Published Web Résumés: Their Purposes and Their Genre Systems." Journal of Business and Technical Communication 20 (4): 425-459.
Kynell-Hunt, Teresa. 2000. Writing in a Milieu of Utility: The Move to Technical Communication in American Engineering Programs, 1850–1950. 2nd ed. Stamford, CT: Ablex.
Lewis, Justin. 2016. "Content Management Systems, Bittorrent Trackers, and Large-Scale Rhetorical Genres." Journal of Technical Writing and Communication 46 (1): 4–26. doi: 10.1177/0047281615600634.
Lingard, Lorelei, and Richard Haber. 2002. "Learning Medical Talk: How the Apprenticeship Complicates Current Explicit/Tacit Debates in Genre Instruction." In The Rhetoric and Ideology of Genre: Strategies for Stability and Change, edited by Richard Coe, Lorelei Lingard and Tatiana Teslenko, 155–170. Cresskill, NJ: Hampton Press.
Luzón, María José. 2005. "Genre Analysis in Technical Communication." IEEE Transactions on Professional Communication 48 (3): 285-295. doi: 10.1109/TPC.2005.853937.
McCarthy, Lucille Parkinson. 1991. "A Psychiatrist Using DSM-III: The Influence of a Charter Document in Psychiatry." In Textual Dynamics of the Professions: Historical and Contemporary Studies of Writing in Professional Communities, edited by Charles Bazerman and James Paradis, 358–378. Madison, WI: University of Wisconsin Press.
McCarthy, Lucille Parkinson, and Joan P. Gerring. 1994. "Revising Psychiatry's Charter Document: DSM-IV." Written Communication 11: 147–192.
Miller, Carolyn R. 1984. "Genre as Social Action." Quarterly Journal of Speech 70 (2): 151–167. doi: 10.1080/00335638409383686
Moran, Michael G., and Debra Journet, eds. 1985. Research in Technical Communication: A Bibliographic Sourcebook. Westport, CT: Greenwood Press.
Orlikowski, Wanda J., and JoAnne Yates. 1994. "Genre Repertoire: The Structuring of Communicative Practices in Organizations." Administrative Science Quarterly 39 (4): 541–574.
Paré, Anthony. 2002. "Genre and Identity: Individuals, Institutions, and Ideology." In The Rhetoric and Ideology of Genre: Strategies for Stability and Change, edited by Richard Coe, Lorelei Lingard and Tatiana Teslenko, 57–71. Cresskill, NJ: Hampton Press.
Russell, David R. 1997. "Rethinking Genre in School and Society: An Activity Theory Analysis." Written Communication 14 (4): 504–554. doi: 10.1177/0741088397014004004.
Russell, David R. 2007. "Rethinking the Articulation between Business and Technical Communication and Writing in the Disciplines: Useful Avenues for Teaching and Research." Journal of Business and Technical Communication 21 (3): 248-277.
Schryer, Catherine F. 1993. "Records as Genre." Written Communication 10 (2): 200–234.
Schryer, Catherine F. 2002. "Genre and Power: A Chronotopic Analysis." In The Rhetoric and Ideology of Genre: Strategies for Stability and Change, edited by Richard Coe, Lorelei Lingard and Tatiana Teslenko, 73–102. Cresskill, NJ: Hampton Press.
Schryer, Catherine F., and Philippa Spoel. 2005. "Genre Theory, Health-Care Discourse, and Professional Identity Formation." Journal of Business and Technical Communication 19 (3): 249-278.
Selber, Stuart A. 2010. "A Rhetoric of Electronic Instruction Sets." Technical Communication Quarterly 19 (2): 95-117. doi: 10.1080/10572250903559340.
Sherlock, Lee. 2009. "Genre, Activity, and Collaborative Work and Play in World of Warcraft: Places and Problems of Open Systems in Online Gaming." Journal of Business and Technical Communication 23 (3): 263-293.
Sides, Charles H., ed. 1989. Technical and Business Communication: Bibliographic Essays for Teachers and Corporate Trainers. Urbana, IL, and Washington, DC: National Council of Teachers of English and Society for Technical Communication.
Smart, Graham, and Nicole Brown. 2008. "Developing a 'Discursive Gaze': Participatory Action Research with Student Interns Encountering New Genres in the Activity of the Workplace." In Rhetorical Genre Studies and Beyond, edited by Natasha Artemeva and Aviva Freedman, 241–279. Winnipeg, Manitoba: Inkshed.
Spafford, Marlee, Catherine F. Schryer, Marcellina Mian, and Lorelei Lingard. 2006. "Look Who's Talking: Teaching and Learning Using the Genre of Medical Case Presentations." Journal of Business and Technical Communication 20 (2): 121–158.
Spinuzzi, Clay. 2003. Tracing Genres through Organizations: A Sociocultural Approach to Information. Edited by Bonnie Nardi, Viktor Kaptelinin and Kirsten Foot, Acting with Technology. Cambridge, MA: MIT Press.
Spinuzzi, Clay. 2004. "Four Ways to Investigate Assemblages of Texts: Genre Sets, Systems, Repertoires, and Ecologies." In 22nd Annual International Conference on Design of Communication: The Engineering of Quality Documentation, 110–116. Memphis, TN: Association for Computing Machinery.
Spinuzzi, Clay, and Mark Zachry. 2000. "Genre Ecologies: An Open-System Approach to Understanding and Constructing Documentation." ACM Journal of Computer Documentation 24 (3): 169–181.
Swarts, Jason. 2006. "Coherent Fragments: The Problem of Mobility and Genred Information." Written Communication 23 (2): 173–201.
Swarts, Jason. 2008. "Information Technologies as Discursive Agents: Methodological Implications for the Empirical Study of Knowledge Work." Journal of Technical Writing and Communication 38 (4): 301-329. doi: 10.2190/TW.38.4.b.
Swarts, Jason. 2010. "Recycled Writing: Assembling Actor Networks from Reusable Content." Journal of Business and Technical Communication 24 (2): 127-163.
Swarts, Jason. 2015. "Help Is in the Helping: An Evaluation of Help Documentation in a Networked Age." Technical Communication Quarterly 24 (2): 164–187. doi: 10.1080/10572252.2015.1001298.
Van Nostrand, A. D. 1994. "A Genre Map of R&D Knowledge Production for the U.S. Department of Defense." In Genre and the New Rhetoric, edited by Aviva Freedman and Peter Medway, 133–145. London: Taylor and Francis.
Winsor, Dorothy A. 1999. "Genre and Activity Systems: The Role of Documentation in Maintaining and Changing Engineering Activity Systems." Written Communication 16 (2): 200–224.
Wojahn, Patricia, Julie Dyke, Linda Ann Riley, Edward Hensel, and Stuart C. Brown. 2001. "Blurring Boundaries between Technical Communication and Engineering: Challenges of a Multidisciplinary, Client-Based Pedagogy." Technical Communication Quarterly 10 (2): 129-148. doi: 10.1207/s15427625tcq1002_2.
Xu, Xunfeng, Yan Wang, Gail Forey, and Lan Li. 2010. "Analyzing the Genre Structure of Chinese Call-Center Communication." Journal of Business and Technical Communication 24 (4): 445-475.
Yates, JoAnne, and Wanda Orlikowski. 2002. "Genre Systems: Structuring Interaction through Communicative Norms." Journal of Business Communication 39 (1): 13–35.
Yli-Jokipii, Hilkka M. 1998. "The Representation of Leisure in Corporate Publicity Material: The Case of a Finnish Pine Construction Company." Technical Communication Quarterly 7 (3): 259-270. doi: 10.1080/10572259809364630.
Zachry, Mark. 2000. "Communicative Practices in the Workplace: A Historical Examination of Genre Development." Journal of Technical Writing and Communication 30 (1): 57-79. doi: 10.2190/UMGD-LGR6-QJUE-CJHY.
Zucchermaglio, Cristina, and Alessandra Talamo. 2003. "The Development of a Virtual Community of Practices Using Electronic Mail and Communicative Genres." Journal of Business and Technical Communication 17 (3): 259-284. |
Recycled nylon has the same benefits as recycled polyester: it diverts waste from landfills, and its production uses far fewer resources than virgin nylon (including water, energy, and fossil fuels).
A large share of recycled nylon comes from old fishing nets, which makes it an effective way to divert waste from the ocean; other sources include nylon carpets, tights, and similar textiles.
Recycling nylon is still more expensive than producing virgin nylon, but it offers many environmental advantages.
Considerable research is under way to improve the quality and reduce the cost of the recycling process. |
Let us examine supply chain management (SCM): what it is, its definition and meaning, and some background to understand it better.
Supply chain management, or SCM, is an effective tool for improving business processes. A supply chain begins with the source of supply and ends at the point of consumption. SCM covers the flow of materials and information from suppliers, through a number of value-adding processes and distribution channels, to the customer. Find out more on the advantages and disadvantages of supply chain management.
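This begins-at-supply, ends-at-consumption flow can be pictured with a minimal sketch; the stage names in the Python fragment below are invented for illustration, not drawn from any particular SCM framework.

```python
# Invented supply chain stages, ordered from source of supply
# to point of consumption.
stages = ["supplier", "manufacturing", "distribution", "retail", "customer"]

def trace_flow(item, stages):
    # Materials move forward through each value-adding stage;
    # each step also generates information (here, a log entry).
    return [f"{item} at {stage}" for stage in stages]

for entry in trace_flow("order #1", stages):
    print(entry)
```
|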
Hydroxyethyl cellulose (HEC) is a non-ionic cellulose ether made through a series of chemical processes, with the natural polymer cellulose as the raw material. It is an odorless, tasteless, and non-toxic white to off-white powder or granular solid. It dissolves in water to form a transparent viscous solution. It offers thickening, adhesion, dispersion, emulsification, film formation, suspension, absorption, surface activity, salt tolerance, and water retention, and it provides protective colloids, among other properties. Hydroxyethyl cellulose can be used in building materials, paints, petrochemicals, synthetic resins, ceramics, pharmaceuticals, food, textiles, agriculture, cosmetics, tobacco, ink, papermaking, and other industries.
Hydroxyethyl cellulose can be used as a non-ionic surfactant. In addition to its thickening, suspending, adhesive, emulsifying, film-forming, dispersing, water-retaining, and protective-colloid properties, it has the following characteristics:
1. Hydroxyethyl cellulose is soluble in hot or cold water and does not precipitate on heating or boiling, giving it a wide range of solubility and viscosity characteristics as well as non-thermal gelation;
2. It is non-ionic and can coexist with a wide range of other water-soluble polymers, surfactants, and salts, making it a fine colloidal thickener for solutions containing high concentrations of electrolytes;
3. Its water-retention capacity is twice that of methyl cellulose, and it has better flow-regulating properties;
4. Its viscosity is stable and it resists mildew, giving paint good in-can appearance on opening and better leveling properties during application.
Physical and Chemical Properties
Hydroxyethyl cellulose is soluble in both cold and hot water, but under normal circumstances it does not dissolve in most organic solvents. When the pH is within the range of 2-12, the change in viscosity is small; beyond this range, the viscosity decreases. Surface-treated hydroxyethyl cellulose can be dispersed in cold water without agglomeration, but its dissolution rate is slower, generally requiring about 30 minutes. With heating, or by adjusting the pH to 8-10, it dissolves rapidly.