If you ever have the opportunity to tour a quarry, take it! If you have ever wondered what goes on inside a rock quarry, the simple explanation is that this is where big rocks come from. Keep in mind that no two rock quarries are the same. They all operate in pretty much the same way, but where they are located and what kind of stone is being quarried can make each one run a little differently. A rock quarry mines big rocks that are eventually made into smaller rocks or sand. These are sold to builders and contractors, who turn them into bridges, highways and roadways, houses, and shopping malls. You'll also find the rock in your church, your kid's school, and even the office building where you work.

First Things First

Before a rock quarry can be started, geologists need to find the right place. They survey the land, and the site is designed. Then comes the legal part, where licenses, permits and other "red tape" issues are handled. Next, the company purchases equipment and builds roadways to the facility so it can construct the processing plant.

The Good Environmental Neighbor

Because rock quarries are built close to neighborhoods, it is important that a quarry is built with buffer zones around it to minimize the noise and avoid disturbing the neighbors. The entrance is usually landscaped so that it looks natural, and special water systems are installed; the water used to process the stone is recycled. The land must be cleared to reach the rock below the surface, and quarry sites will often donate the cleared materials to the surrounding community. Next comes the drilling and blasting needed to get the rock out of the ground. The process used depends on how much rock is needed, how it is to be broken, and other factors. The blasting itself takes only a few seconds and is monitored for the sound and the vibrations traveling into the surrounding community. Pit loaders remove the large pieces and dump them into haul trucks.
Wastewater treatment is the collection, processing, treating, recycling or disposal of waste material in water (usually waste produced by human activities) in order to reduce its effect on human health and the environment. Domestic wastewater is defined as wastewater from household water use, while industrial wastewater comes from industrial practices only. Industrial wastewater treatment covers the mechanisms and processes used to treat waters that have been contaminated by anthropogenic industrial or commercial activities, prior to their release into the environment or their reuse. Put another way, industrial wastewater is the aqueous discard that results from substances having been dissolved or suspended in water, typically during the use of water in an industrial manufacturing process or the cleaning activities that take place along with that process. Treatment and discharge systems can differ sharply between countries.

Most industries produce some wastewater. Industries use a large number of substances in their manufacturing processes and also generate solid residues, liquid effluents and gaseous emissions as wastes. Recent trends have been to minimize such production or to recycle treated wastewater within the production process. After treatment, the treated industrial wastewater (or effluent) may be reused or released to a sanitary sewer or to a surface water in the environment.

The basic function of wastewater treatment is to speed up the natural processes by which water is purified; the pollutants in industrial wastewater require that communities give nature a helping hand. Wastewater treatment is needed so that we can use our rivers and streams for fishing, swimming and drinking water, and it is one of the most important services a municipality may provide, as well as one of the least visible. For the first half of the 20th century, pollution in the nation's urban waterways resulted in frequent occurrences of low dissolved oxygen and fish kills. At the small end of the scale, when properly designed, installed, and maintained, septic systems can be the most cost-effective and efficient method of wastewater treatment a homeowner can choose, and they remain a practical alternative for small communities.

There are two basic stages in the treatment of wastes: primary and secondary. Primary treatment generally refers to a sedimentation process ahead of the main system or secondary treatment; in the primary stage, solids are allowed to settle and are removed from the wastewater. In domestic wastewater treatment, preliminary and primary processes will remove approximately 25 percent of the organic load and virtually all of the nonorganic solids.

Industrial waste streams vary considerably in both level of contaminants (pH, total suspended solids, and so on) and flow rates, so the treatment of industrial wastewater must be designed specifically for the particular type of effluent produced. Generally, industrial wastewater can be divided into two types: inorganic industrial wastewater and organic industrial wastewater. Industrial wastewater often contains a variety of toxic substances, so purification measures should be taken before discharge; following the seven basic principles of industrial wastewater treatment minimizes the hazard. Equalization (EQ) is a means of buffering or equalizing the characteristics of wastewater prior to entering the wastewater treatment system. It is also important to remove oil emulsions before biological treatment: biological treatment normally breaks oil emulsions to form free oil in biological treatment systems, and operators typically want to see oil below 30 ppm entering a biological treatment process.

Regulation is a major driver, although there are motivating factors for industrial wastewater treatment beyond compliance, including cost management of treatment plants. Wastewater discharges from industrial and commercial sources may contain pollutants at levels that could affect the quality of receiving waters or interfere with publicly owned treatment works (POTWs) that receive those discharges. The U.S. Environmental Protection Agency issues permits for discharge to surface waters under the National Pollutant Discharge Elimination System. Industrial wastewater that discharges to domestic wastewater treatment facilities, however, is regulated under the industrial pretreatment component of a state's domestic wastewater program. Compliance with tightening federal regulations for wastewater treatment, handling and disposal, such as the Clean Water Act, the Resource Conservation and Recovery Act, and the Safe Drinking Water Act, requires plant management to stay focused on the wastewater issue. The Guide for Industrial Waste Management addresses non-hazardous industrial waste subject to Subtitle D of the Resource Conservation and Recovery Act (RCRA), and 40 CFR Part 257, Subparts A and B, provide federal requirements for non-hazardous industrial waste facilities or practices.

Managing wastewater is a necessary task for small businesses and production facilities, as well as for large industrial firms, and training matters. Operator courses are designed to train operators in the practical aspects of safely and effectively operating and maintaining industrial wastewater treatment plants, emphasizing safe practices and procedures. Topics of concern to the industrial wastewater treatment plant operator include safety and emergency preparedness, regulatory compliance, laboratory analysis, operation and maintenance of the various equipment units that may be part of a wastewater treatment operation, and the treatment processes and technologies seen throughout industry. Course notes in this area typically cover sludge lagooning, removal of dissolved organic solids, phosphorus removal (orthophosphates and enhanced biological phosphorus removal), physical treatment and membrane technologies, the preliminary unit processes required to prepare industrial wastewaters for secondary treatment, and the differences and similarities between sewage treatment plants (STPs) and industrial wastewater treatment plants (IWTPs). The Biological Wastewater Treatment series, based on the book Biological Wastewater Treatment in Warm Climate Regions, comprises six textbooks giving a state-of-the-art presentation of the science and technology of biological wastewater treatment.
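The equalization (EQ) step mentioned above lends itself to a quick illustration. The following is a minimal Python sketch of the standard mass-diagram approach to sizing an EQ basin: it finds the largest cumulative surplus and deficit between hourly inflow and the average outflow rate. The hourly flow figures are invented for illustration and do not come from any source quoted here.

```python
# Minimal equalization (EQ) basin sizing sketch using the mass-diagram method.
# The hourly inflows below are hypothetical, purely for illustration.

inflows_m3 = [40, 35, 30, 30, 45, 70, 95, 110, 120, 115, 100, 90,
              85, 80, 75, 70, 75, 85, 95, 90, 75, 60, 50, 45]  # m^3 per hour

avg = sum(inflows_m3) / len(inflows_m3)  # downstream plant treats the average flow

# Track the cumulative difference between inflow and outflow; the basin must
# hold the largest surplus that accumulates before high flows subside.
cumulative, max_dev, min_dev = 0.0, 0.0, 0.0
for q in inflows_m3:
    cumulative += q - avg
    max_dev = max(max_dev, cumulative)
    min_dev = min(min_dev, cumulative)

required_volume = max_dev - min_dev  # m^3, ignoring safety factors and mixing
print(f"Average flow: {avg:.1f} m^3/h; required EQ volume: {required_volume:.0f} m^3")
```

In practice a safety factor, plus an allowance for mixing and aeration equipment, would be added to the computed volume.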
In order to gauge how your business is doing, you'll need more than single numbers extracted from the financial statements. You'll need to view each number in the context of the whole picture. For example, your income statement may show a net profit of $100,000. But is this good? If this profit is earned on sales of $500,000, it may be very good. But if sales of $2,000,000 are required to produce the net profit of $100,000, the picture changes drastically. A $2,000,000 sales figure may seem impressive, but not if it takes $1,900,000 in assets to produce those sales. The true meaning of figures from the financial statements emerges only when they are compared to other figures. Such comparisons are the essence of why business and financial ratios have been developed.

Working with the Most Important Ratios

Various ratios can be established from key figures on the financial statements. These ratios are very simple to calculate: sometimes they are expressed in the format "x:y," and other times they are simply one number divided by another, with the answer expressed as a percentage. Your accounting software may be able to produce them in a few clicks, or you can always export the data to a spreadsheet and let technology do most of the work.

These simple ratios can be a powerful tool because they allow you to immediately grasp the relationship expressed. When you routinely calculate and record a group of ratios at the end of every accounting period, you can assess the performance of your business over time, and compare your business to others in the same industry or to others of a similar size. In doing so, you won't be alone: banks routinely use business ratios to evaluate a business that's applying for a loan, and some creditors use them to determine whether to extend credit to you. When you compare changes in your business's ratios from period to period, you can pinpoint improvements in performance or developing problem areas. By comparing your ratios to those in other businesses, you can see possibilities for improvement in key areas; a number of sources, including many industry and trade associations, publish ratios you can use for comparison.

There are dozens and dozens of financial ratios that you can look at, but many will have little or no meaning for your business. In the following discussion, we'll concentrate on three groups:

- Liquidity Ratios
- Profitability Ratios
- Solvency Ratios

The liquidity ratios are generally the best place to start.

Understanding Liquidity Ratios

These ratios are probably the most commonly used of all the business ratios. Your creditors may often be particularly interested in these because they show the ability of your business to quickly generate the cash needed to pay your bills. This information should be highly interesting to you as well. Liquidity ratios are sometimes called working capital ratios because that, in essence, is what they measure. The liquidity ratios comprise:

- The Current Ratio
- The Quick Ratio

Liquidity ratios are commonly examined by banks when they are evaluating a loan application. Once you get the loan, your lender may also require that you continue to maintain a certain minimum ratio as part of the loan agreement. For that reason, steps to improve your liquidity ratios are sometimes necessary. That's why you'll need to familiarize yourself with both of the liquidity ratios.

Figuring Your Current Ratio

This ratio provides a way of looking at your working capital and measuring your short-term solvency. The current ratio is expressed in the format x:y, where x is the amount of all current assets and y is the amount of all current liabilities.
Generally, your current ratio shows the ability of your business to generate cash to meet its short-term obligations. A decline in this ratio can be attributable to an increase in short-term debt, a decrease in current assets, or a combination of both. Regardless of the reasons, a decline in this ratio means a reduced ability to generate cash. If you're looking to raise money by selling some stock through an initial public offering, many state securities bureaus will require that you have a current ratio of 2:1 or better.

Merely paying off some current liabilities can improve your current ratio. If your business's current assets total $60,000 (including $30,000 cash) and your current liabilities total $30,000, the current ratio is 2:1. Using half your cash to pay off half the current debt just prior to the balance sheet date improves this ratio to 3:1 ($45,000 in current assets to $15,000 in current liabilities). If your business lacks the cash to reduce current debts, long-term borrowing to repay short-term debt can also improve this ratio. If your current assets total $50,000 and your current liabilities total $40,000, the poor 5:4 current ratio changes to a better 2:1 ratio if $15,000 of long-term debt is used to refinance an equal amount of short-term debt (you'll now have $50,000 in current assets to $25,000 in current liabilities).

Other possibilities may reveal themselves if you carefully scrutinize the elements in the current asset and current liability sections of your company's balance sheet. The idea is simply to take steps to increase total current assets and/or decrease total current liabilities as of the balance sheet date. For example:

- Can you place a higher value on your year-end inventory?
- Can pending orders be invoiced and placed on your books sooner to increase your accounts receivable?
- Can purchases be delayed to reduce accounts payable?

Figuring Your Quick Ratio

The quick ratio, also known as the "acid test," serves a function quite similar to that of the current ratio. The difference between the two is that the quick ratio subtracts inventory from current assets and compares the resulting figure (also called the quick current assets) to current liabilities. Why? Inventory can be turned to cash only through sales, so the quick ratio gives you a better picture of your ability to meet your short-term obligations regardless of your sales levels. Over time, a stable current ratio combined with a declining quick ratio may indicate that you've built up too much inventory. If your quick current assets are $90,000 and your current liabilities are $30,000, your acid-test ratio would be 3:1 (90,000:30,000).

How to Improve Your Quick Ratio

Because this ratio is quite similar to the current ratio, the same improvement strategies generally apply, although steps that merely convert inventory into other current assets will help the quick ratio where they wouldn't change the current ratio. In evaluating the current ratio and the quick ratio, you should keep in mind that they give only a general picture of your business's ability to meet short-term obligations. They are not an indication of whether each specific obligation can be paid when due. To determine payment probability, you may want to construct a cash flow budget. In general, a quick or acid-test ratio of at least 1:1 is good. That signals that your quick current assets can cover your current liabilities.
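Both liquidity ratios boil down to one division, so they're easy to script. The sketch below is a minimal Python illustration using the figures from the examples above; the function names are our own, not taken from any particular accounting package.

```python
# Liquidity ratio sketch using the figures from the examples above.

def current_ratio(current_assets: float, current_liabilities: float) -> float:
    return current_assets / current_liabilities

def quick_ratio(current_assets: float, inventory: float,
                current_liabilities: float) -> float:
    # The "acid test": inventory is excluded because it converts to cash
    # only through sales.
    return (current_assets - inventory) / current_liabilities

assets, cash, liabilities = 60_000, 30_000, 30_000
print(f"Before: {current_ratio(assets, liabilities):.1f}:1")  # 2.0:1

# Use half the cash to pay off half the current debt just before the
# balance sheet date; both sides shrink, so the ratio improves.
paid = 15_000
print(f"After:  {current_ratio(assets - paid, liabilities - paid):.1f}:1")  # 3.0:1
```

The same mechanics explain the refinancing example: moving $15,000 of debt from current to long-term liabilities leaves current assets untouched while shrinking the denominator.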
Example of a Typical Income Statement

This typical income statement, showing three years' information, should demonstrate the value of an income statement.

Smith Manufacturing Company
Years Ended December 31, 201Z, 201Y, and 201X

| | 201Z | 201Y | 201X |
|Sales | $ X | $ X | $ X |
|(Sales returns and allowances) | ($ X) | ($ X) | ($ X) |
|Net sales | $ X | $ X | $ X |
|Cost of goods sold: | | | |
|Beginning inventory | $ X | $ X | $ X |
|Cost of goods purchased | $ X | $ X | $ X |
|(Ending inventory) | ($ X) | ($ X) | ($ X) |
|Cost of goods sold | $ X | $ X | $ X |
|Gross profit | $ X | $ X | $ X |
|Selling expense | ($ X) | ($ X) | ($ X) |
|General and administrative expense | ($ X) | ($ X) | ($ X) |
|Total operating expenses | ($ X) | ($ X) | ($ X) |
|Income from operations | $ X | $ X | $ X |
|Interest expense | ($ X) | ($ X) | ($ X) |
|Pretax income | $ X | $ X | $ X |
|Income taxes | ($ X) | ($ X) | ($ X) |
|Net income | $ X | $ X | $ X |

(The notes are an integral part of this statement.)

Using the Income Statement

As with the balance sheet, an in-depth knowledge of accounting is not necessary for you to make good use of the income statement data. For example, you can use your income statement to determine sales trends. Are sales going up or down, or are they holding steady? If they're going up, are they going up at the rate you want or expect? Also, if you sell goods, you can use the income statement to monitor quality control. Look at your sales returns and allowances. If that number is rising, it may indicate that you have a problem with product quality.

Gross profit margin should be closely monitored to make sure that your business is operating at the same profitability levels as it grows. To find this margin, divide your gross profit (sales minus cost of goods sold) by your sales for each of the years covered by the income statement. If the percentage is going down, it may indicate that you need to try to raise prices.

Also, check your selling expense. It should increase only in proportion to increases in sales. Disproportionate increases in selling expense should be followed up and corrected. General and administrative expenses should also be closely watched. Increases in this area may mean that the company is getting too bureaucratic and is in line for some cost-cutting measures, or that equipment maintenance is too expensive and new equipment should be considered.

Interest expense is an important measure of how your company is doing. If your interest expense is increasing rapidly as a percentage of sales or net income, you may be in the process of becoming overburdened with debt.

Creating a Statement of Changes in Financial Position

The statement of changes in financial position provides data not explicitly present in the balance sheet or the income statement. This statement helps to explain how your company acquired its money and how it was spent. It can also help to identify financing needs, cash drains, and holes in the cash budgeting process. Use the statement of changes in financial position as a tool to analyze cash inflows and outflows, and as a starting point to forecast future cash flows and financing requirements.

Accounting standards give preparers of this statement quite a bit of flexibility in how they arrange and format the information. However, the Financial Accounting Standards Board has stated its intention that this statement should evolve into one whose focus is on cash and changes in cash. This position has been strongly endorsed by the Financial Executives Institute (FEI). As might be expected, more and more companies are using a cash focus for the statement of changes in financial position.
In fact, the statement is often called the "Sources and Uses of Cash Statement."
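Returning to the income statement checks described above: the trend monitoring (gross margin, selling expense in proportion to sales) boils down to a few divisions per year. Below is a minimal Python sketch; the three-year figures are invented stand-ins for the $X placeholders in the sample statement.

```python
# Trend checks on income statement data; the yearly figures are invented
# stand-ins for the $X placeholders in the statement above.

years = ["201X", "201Y", "201Z"]
sales = [800_000, 900_000, 1_000_000]
cogs = [480_000, 560_000, 650_000]
selling_expense = [80_000, 95_000, 120_000]

for yr, s, c, se in zip(years, sales, cogs, selling_expense):
    gross_margin = (s - c) / s  # gross profit divided by sales
    selling_pct = se / s        # should grow only in proportion to sales
    print(f"{yr}: gross margin {gross_margin:.1%}, "
          f"selling expense {selling_pct:.1%} of sales")

# With these invented numbers, gross margin slides from 40.0% to 35.0% while
# selling expense climbs from 10.0% to 12.0% of sales: exactly the falling
# margin and disproportionate expense growth the text says to investigate.
```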
Throughput accounting (TA) is a principle-based and simplified management accounting approach that provides managers with decision support information for enterprise profitability improvement. TA is relatively new in management accounting. It is an approach that identifies the factors that limit an organization from reaching its goal, and then focuses on simple measures that drive behavior in key areas towards reaching organizational goals. TA was proposed by Eliyahu M. Goldratt as an alternative to traditional cost accounting. As such, Throughput Accounting is neither cost accounting nor costing, because it is cash focused and does not allocate all costs (variable and fixed expenses, including overheads) to products and services sold or provided by an enterprise. Considering the laws of variation, only costs that vary totally with units of output (see the definition of T below for TVC), e.g. raw materials, are allocated to products and services; these are deducted from sales to determine throughput.

Throughput Accounting is a management accounting technique used as the performance measure in the Theory of Constraints (TOC). It is the business intelligence used for maximizing profits; however, unlike cost accounting, which primarily focuses on cutting costs and reducing expenses to make a profit, Throughput Accounting primarily focuses on generating more throughput. Conceptually, Throughput Accounting seeks to increase the speed or rate at which throughput (see the definition of T below) is generated by products and services with respect to an organization's constraint, whether the constraint is internal or external to the organization. Throughput Accounting is the only management accounting methodology that considers constraints as factors limiting the performance of organizations.

Management accounting is an organization's internal set of techniques and methods used to maximize shareholder wealth. Throughput Accounting is thus part of the management accountants' toolkit, ensuring efficiency where it matters as well as the overall effectiveness of the organization. It is an internal reporting tool. Outside or external parties to a business depend on accounting reports prepared by financial (public) accountants who apply Generally Accepted Accounting Principles (GAAP), issued by the Financial Accounting Standards Board (FASB) and enforced by the U.S. Securities and Exchange Commission (SEC), or who apply other local and international standards such as the International Financial Reporting Standards (IFRS).

Throughput Accounting improves profit performance by supporting better management decisions, using measurements that more closely reflect the effect of decisions on three critical monetary variables: throughput, investment (also known as inventory), and operating expense (defined below).

When cost accounting was developed in the 1890s, labor was the largest fraction of product cost and could be considered a variable cost. Workers often did not know how many hours they would work in a week when they reported on Monday morning, because time-keeping systems were rudimentary. Cost accountants therefore concentrated on how efficiently managers used labor, since it was their most important variable resource. Today, however, workers who come to work on Monday morning almost always work 40 hours or more; their cost is fixed rather than variable. Yet many managers are still evaluated on their labor efficiencies, and many "downsizing," "rightsizing," and other labor reduction campaigns are based on them.
Goldratt argues that, under current conditions, labor efficiencies lead to decisions that harm rather than help organizations. Throughput Accounting, therefore, removes standard cost accounting's reliance on efficiencies in general, and labor efficiency in particular, from management practice. Many cost and financial accountants agree with Goldratt's critique, but they have not agreed on a replacement of their own, and there is enormous inertia in the installed base of people trained to work with existing practices.

The concepts of Throughput Accounting

Goldratt's alternative begins with the idea that each organization has a goal and that better decisions increase its value. The goal for a profit-maximizing firm is stated as increasing net profit now and in the future. Profit maximization, seen from a Throughput Accounting viewpoint, is about maximizing a system's profit mix without cost accounting's traditional allocation of total costs. Throughput Accounting actions include obtaining the maximum net profit in the minimum time period, given limited resource capacities and capabilities. These resources include machines, capital (own or borrowed), people, processes, technology, time, materials and markets. Throughput Accounting applies to not-for-profit organizations too, where they develop goals that make sense in their individual cases; these goals are commonly measured in goal units. Throughput Accounting also pays particular attention to the concept of a "bottleneck" (referred to as a constraint in the Theory of Constraints) in the manufacturing or servicing processes.

Throughput Accounting uses three measures of income and expense:

- Throughput (T) is the rate at which the system produces "goal units." When the goal units are money (in for-profit businesses), throughput is net sales (S) less totally variable cost (TVC), generally the cost of the raw materials (T = S – TVC). Note that T only exists when there is a sale of the product or service. Producing materials that sit in a warehouse does not form part of throughput but rather investment. ("Throughput" is sometimes referred to as "throughput contribution" and has similarities to the concept of "contribution" in marginal costing, which is sales revenues less "variable" costs, "variable" being defined according to the marginal costing philosophy.)
- Investment (I) is the money tied up in the system. This is money associated with inventory, machinery, buildings, and other assets and liabilities. In earlier Theory of Constraints (TOC) documentation, the "I" was interchanged between "inventory" and "investment." The preferred term is now only "investment." Note that TOC recommends inventory be valued strictly on the totally variable cost associated with creating the inventory, not with additional cost allocations from overhead.
- Operating expense (OE) is the money the system spends in generating "goal units." For physical products, OE is all expenses except the cost of the raw materials. OE includes maintenance, utilities, rent, taxes and payroll.

Organizations that wish to increase their attainment of The Goal should therefore require managers to test proposed decisions against three questions. Will the proposed change:

- Increase throughput? How?
- Reduce investment (inventory) (money that cannot be used)? How?
- Reduce operating expense? How?
The answers to these questions determine the effect of proposed changes on system-wide measurements:

- Net profit (NP) = throughput – operating expense = T – OE
- Return on investment (ROI) = net profit / investment = NP/I
- TA productivity = throughput / operating expense = T/OE
- Investment turns (IT) = throughput / investment = T/I

These relationships between financial ratios, as illustrated by Goldratt, are very similar to a set of relationships defined by DuPont and General Motors financial executive Donaldson Brown about 1920. Brown did not advocate changes in management accounting methods, but instead used the ratios to evaluate traditional financial accounting data.

For example: a railway coach company was offered a contract to make 15 open-topped streetcars each month, using a design that included ornate brass foundry work but very little of the metalwork needed to produce a covered rail coach. The buyer offered to pay $280 per streetcar. The company had a firm order for 40 rail coaches each month at $350 per unit.

The cost accountant determined the monthly cost of operating the foundry versus the metalwork shop:

| Overhead Cost by Department | Total Cost ($) | Hours Available per month | Cost per hour ($) |

The company was at full capacity making 40 rail coaches each month. Since the foundry was expensive to operate, and purchasing brass as a raw material for the streetcars was expensive, the accountant determined that the company would lose money on any streetcars it built. He showed an analysis of the estimated product costs based on standard cost accounting and recommended that the company decline to build any streetcars.

| Standard Cost Accounting Analysis | Streetcars | Rail coach |
| Foundry Time (hrs) | 3.0 | 2.0 |
| Metalwork Time (hrs) | 1.5 | 4.0 |
| Raw Material Cost | $120.00 | $60.00 |
| Profit per Unit | $(7.81) | $116.25 |

However, the company's operations manager knew that recent investment in automated foundry equipment had created idle time for workers in that department. The constraint on production of the rail coaches was the metalwork shop. She made an analysis of profit and loss if the company took the contract, using throughput accounting to determine the profitability of products by calculating "throughput" (revenue less totally variable cost) in the metal shop.

| Throughput Cost Accounting Analysis | Decline Contract | Take Contract |
| Metal shop Hours | 160 | 159 |
| Coach Raw Material Cost | $(2,400) | $(2,040) |
| Streetcar Raw Material Cost | $0 | $(1,800) |

After the presentations from the company accountant and the operations manager, the president understood that the metal shop capacity was limiting the company's profitability. The company could make only 40 rail coaches per month. But by taking the contract for the streetcars, the company could make nearly all the rail coaches ordered and also meet all the demand for streetcars. The result would increase throughput in the metal shop from $6.25 to $10.38 per hour of available time, and increase profitability by 66 percent.

One of the most important aspects of Throughput Accounting is the relevance of the information it produces. Throughput Accounting reports what currently happens in business functions such as operations, distribution and marketing. It does not rely solely on GAAP's financial accounting reports (which still need to be verified by external auditors) and is thus relevant to current decisions made by management that affect the business now and in the future.
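The streetcar example lends itself to a short script. Below is a minimal Python sketch of the throughput analysis. The "take contract" product mix (34 coaches plus 15 streetcars) is inferred from the 159 constraint hours and the $(2,040) coach raw material cost in the table above; the quoted profit-per-hour figures additionally depend on operating expense, which the example does not state explicitly, so the sketch stops at throughput.

```python
# Throughput accounting sketch for the streetcar example (numbers from the text).
# Assumption: the metal shop is the constraint, with roughly 160 hours per month.

products = {
    "streetcar":  {"price": 280, "tvc": 120, "metal_hours": 1.5},
    "rail_coach": {"price": 350, "tvc": 60,  "metal_hours": 4.0},
}

# Throughput per unit is T = S - TVC; ranking by T per constraint hour is the
# core TOC move that standard cost accounting misses.
for name, p in products.items():
    t_unit = p["price"] - p["tvc"]
    print(f"{name}: T per unit = ${t_unit}, "
          f"T per constraint hour = ${t_unit / p['metal_hours']:.2f}")
# streetcar:  T per unit = $160, T per constraint hour = $106.67
# rail_coach: T per unit = $290, T per constraint hour = $72.50

def scenario(coaches: int, streetcars: int) -> tuple[float, float]:
    """Total throughput and constraint hours for a monthly product mix."""
    t = coaches * (350 - 60) + streetcars * (280 - 120)
    hours = coaches * 4.0 + streetcars * 1.5
    return t, hours

for label, (coaches, cars) in {"decline": (40, 0), "take": (34, 15)}.items():
    t, hours = scenario(coaches, cars)
    print(f"{label}: throughput = ${t:,.0f} over {hours:.1f} constraint hours")
# decline: throughput = $11,600 over 160.0 constraint hours
# take:    throughput = $12,260 over 158.5 constraint hours
```

Even though each streetcar earns less throughput per unit than a coach, it earns more per metal shop hour, which is why taking the contract raises total throughput.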
Throughput Accounting is used in Critical Chain Project Management (CCPM); in Drum-Buffer-Rope (DBR), in businesses that are internally constrained; in Simplified Drum-Buffer-Rope (S-DBR), in businesses that are externally constrained (particularly where the lack of customer orders denotes a market constraint); as well as in strategy, planning and tactics.

- Eliyahu M. Goldratt and Jeff Cox, The Goal, ISBN 0-620-33597-1.
- Thomas Corbett, Throughput Accounting, ISBN 0-88427-158-7.
- Etienne Du Plooy, Throughput Accounting Techniques, ISBN 978-0-9946979-0-5.
- Eric Noreen, Theory of Constraints and its Implications for Management Accounting, ISBN 978-0-88427-116-1.
- John A. Caspari and Pamela Caspari, Management Dynamics, ISBN 0-471-67231-9.
- Eliyahu M. Goldratt, The Haystack Syndrome (p. 19), ISBN 0-88427-089-0.
- Performance Management, Paper F5, Kaplan Publishing UK, p. 17.
- Eliyahu M. Goldratt, Critical Chain, ISBN 0-620-21256-X.
- Eli Schragenheim and H. William Dettmer, Manufacturing at Warp Speed, ISBN 1-57444-293-7.
Present Transportation of Ni

The same unit of nickel is transported up to four times, giving it a very carbon-intensive footprint. Typically, nickel goes from the mine to the smelter, from the smelter to the refinery, from the refinery to the nickel sulphate producer, and from the nickel sulphate producer to the battery precursor producer. The stainless-steel supply chain is very old and entrenched; for the most part, the sunk capital investment needs to be utilized for nickel producers to cover their fixed costs. The production of nickel sulphate and precursor is therefore an extension of an existing supply chain that was developed for a different product, namely nickel for stainless steel, which makes it very inefficient.

This is an opportunity for the Tamarack Project: we have a clean nickel concentrate, even compared to many other high-grade nickel sulphide projects. We have leached our concentrates into a liquor with recoveries of over 99.5%. The next step is to extract the nickel into a sulphate using a hydromet process, which is very similar to what is used for nickel matte. Once we have produced a nickel sulphate, we will give consideration to potentially producing a precursor product.
Supply chain energy efficiency critical to reducing carbon footprint

August 07, 2013 - MMH Editorial

Companies that want to reduce their carbon footprint need to pay attention to the energy they use as well as the energy used by links in their supply chains, according to a new report. The University of Minnesota Institute on the Environment's NorthStar Initiative for Sustainable Enterprise, along with the Environmental Defense Fund, provides suggestions on why and how to reduce energy consumption in a new report, Supply Chain Energy Efficiency: Engaging Small & Medium Entities in Global Production Systems.

Based on a two-day workshop tapping the brains of 31 representatives of energy service companies, financiers, retailers, nongovernmental organizations, government and academia from around the world, the report provides a look into thinking about industrial energy efficiency within the system of a supply chain, and highlights opportunities for corresponding cost-, reputation- and energy-saving improvements.

"The industrial sector consumes nearly one-third of all global primary energy and the opportunities for improving energy efficiency in the industrial sector are vast," said symposium organizer and researcher Jennifer Schmitt. To realize these opportunities, energy must be managed across organizations, industry sectors, supply chains and regions, which will require significant new and increasingly transparent data, common metrics and analytics. Public and private collaboration will be crucial to reduce the transaction costs of implementing supply chain energy efficiency, particularly with regard to credit enhancement, technology provider accreditation and governmental policies.

The report highlights four recommendations coming out of the symposium that span the many actors involved in saving a kilowatt-hour:

1. Engage leading companies to identify high-quality suppliers for pilot supply chain energy efficiency improvements.
2. Create one or more sector-based collaborations for improving supply chain energy efficiency by assembling groups of peer manufacturers within a supply chain and using benchmarking, process capability analysis and best practice sharing to identify and improve energy efficiency and industry competitiveness.
3. Increase transparency and standardization of energy use, audits and supply chain information.
4. Create finance and credit risk approaches and models for portfolio-level energy efficiency and energy management projects.

"These recommendations, coming out of our discussions at the symposium, provide an unprecedented ability to characterize and benchmark sector-level and facility-level energy savings opportunities, share knowledge in ways that allow for the flexible application of technological and organizational information in a supply chain environment, and coordinate resources across regions and across public and private actors," Schmitt said. "Approaching energy efficiency through the supply chain holds great potential for both carbon and financial savings."
Updated: Dec 5, 2019

By Faith Okoko

Good grades: we all want them, but are we doing enough to get them, and is our "enough" efficient? The Business Leadership Club invited Dr. Lonna Murphy, a psychology professor at Passaic County Community College who has done extensive research on memory, to provide insights about studying. The event, "How to Study Smarter!", took place on Thursday, November 14th at 1:30 pm in the Hamilton Club building. In attendance were Professor Cox, the chair of the business department, club advisor Professor Khloud Kourani, Business Leadership Club members and other students. The tips she provided will be helpful given that finals are around the corner. One thing to remember is that human memories are distorted and inaccurate because we still have hunter-gatherer brains.

What is the first step to getting good grades? Apparently, going to class tops the list, and there are a couple of reasons why. It helps with knowing the material that is important to the instructor. Moreover, most teachers simplify information in class and make it palatable. There is also something called the context effect, which establishes that the environment impacts our memory of an event, so learning something in a classroom setting may help the brain remember it. It is the reason why people sometimes forget what they have heard when they walk through a door.

Once in class, it is important to engage. Readiness to listen, asking questions and making connections can all help in the retrieval of material later. Professor Murphy explained that the more that is done in class, the less that will have to be done at home. Picking classes can also influence grades. She remarked that a diverse course schedule can help with focus, and that taking two challenging and similar classes together, such as biology and chemistry, is not a wise decision. She also revealed that it is impossible to focus for more than two hours in class.

In addition, she talked about note-taking. Thinking about what the notes are for, including memory cues for later, writing in your own words and deep processing all help. Highlighting is counterproductive. It is important to pay attention when taking notes and to ask the questions: why is the teacher saying this, and what does this mean? Also, taking the words of the instructor seriously is paramount, because if he is talking about something, it must be important.

What about actual studying? She made known her hatred for flashcards, because oftentimes they focus on definitions and do not involve critical thinking. According to Professor Murphy, it is important to understand and not just memorize, because an education is more than good grades alone. She also made it clear that human beings are not multitaskers and that listening to music while studying is not a good idea. Furthermore, those who start studying early and do it in small chunks, for example thirty minutes a day, instead of cramming the day before the exam, fare better because they get to interact with the material longer.

Studying smarter is bound to be helpful because the countdown to finals and the spring semester has begun. For more information about the Business Leadership Club – BLC, contact club advisor Professor Kourani at [email protected]
Titanium Corporation Receives Grant To Research Extraction of Hydrocarbons and Minerals from Oil Sands Tailings

30 March 2008

(Figure caption: Without recovery, oil sands production is projected to lose 30 million barrels per year of bitumen and naphtha by 2015.)

Alberta Energy has awarded Titanium Corporation a C$3.5-million grant to research the value-added opportunities and environmental benefits of stripping out hydrocarbons and heavy minerals from oil sands tailings streams. Funding for this two-year project is being provided through Alberta's C$200-million Energy Innovation Fund.

Titanium Corporation is a Canadian company that is developing a commercial process to maximize the value existing in waste material presently being deposited in Alberta's oil sands tailings. Rather than channeling mine froth tailings into disposal areas, the mineral-rich stream is sent via pipeline to a separation plant, where bitumen, titanium minerals, zircon and naphtha are to be recovered for commercial use. (Earlier post.)

"Not only can this research result in processing industrial waste into beneficial products, but it has the potential to significantly reduce emissions and improve the environment by extracting bitumen from tailings rather than from mining," said Energy Minister Mel Knight.

The heavy minerals contained in the oil sands deposits are concentrated by the bitumen extraction/recovery process (en route to oil production). The majority of these minerals are contained in the oil sands froth treatment plant (FTP) tailings stream. Titanium Corporation's process will intercept FTP tailings near their discharge into the tailings pond. Two processing facilities will then treat the material recovered. A Primary Concentrator Plant will produce a Heavy Mineral Concentrate. This concentrate will then be separated in a Mineral Separation Plant into final products: ilmenite, leucoxene and zircon.

More than 90% of the world's titanium minerals are sold to the pigment industry, which manufactures products for the paint, coating, paper and plastics industries. Another important use of titanium is in making alloys. Zircon sand is in high demand worldwide and is used by the ceramic, refractory and chemical industries. Naphtha, a liquid hydrocarbon, may also be recovered through the research project and reused for processing bitumen prior to upgrading.
AITKEN (Aitkin), ALEXANDER, surveyor; probably b. at Berwick-upon-Tweed, England, son of David Aitken and his wife, who may have been named Catherine; d. 1799 at Kingston, Upper Canada.

Raised in northern England and in Scotland, Alexander Aitken was trained as a surveyor, probably by his father. The date of his arrival in Canada is not known. Late in 1784 he was made a deputy surveyor at Cataraqui (Kingston, Ont.); his territory comprised the north shore of Lake Ontario. When what is now southern Ontario was divided into four districts in 1788, Aitken stayed on as deputy surveyor at Kingston, the district town of the new Mecklenburg (after 1792 Midland) District, but he continued to work in that part of the Nassau (Home) District which had been his responsibility before 1788. In 1792 he was transferred to the surveyor general's office, created that year for the new province of Upper Canada, but the nature of his duties did not change. He worked with the land board of the Mecklenburg District from its creation in 1788 until its abolition in 1794.

As deputy surveyor Aitken's duties included continuing the actual surveys of his area, usually a concession or two at a time, establishing township boundaries, drawing plans for the government, and assigning lots, principally to loyalists in the early years. The territory Aitken surveyed began at the western end of what is now Leeds County and included the present Frontenac, Lennox and Addington, and Hastings counties, basically in the first two rows of townships from the waterfront. John Collins had already surveyed the Kingston town plot but Aitken resurveyed parts of it, laid out the town extensions, and in 1790 made the surveys of Point Frederick. He also surveyed much of Prince Edward County and the islands east of it. In the Home District to the west Aitken surveyed the first concessions of Murray Township in Northumberland County, the Presqu'ile peninsula, and the town plot for Newcastle. His most important work to the west, however, concerned the plans for York (Toronto). In 1788 he had prepared the first plan for Lord Dorchester [Carleton*], and when John Graves Simcoe* was appointed lieutenant governor of the new province, Aitken accompanied him on his expedition north to what is now Lake Simcoe, the shores of which he then surveyed. In 1793 he prepared a new town plan of York and surveyed the shores of Burlington Bay (Hamilton harbour) and the start of Dundas Street to the west. The following year saw him doing further work along Yonge Street, north of York, and at Penetanguishene harbour.

Much of his surveying was extremely frustrating. Basic equipment was frequently unavailable, pay for the crew was slow in coming, and settlers were dissatisfied with their locations. Poor farmland was a problem. Hungerford Township, for example, was all rock and swamp and he was afraid to offer it to anyone. Inaccuracy, or claims of inaccuracy, in surveys also caused problems. Aitken had to investigate claims that his predecessors had erred in the survey of Fredericksburgh (North and South Fredericksburgh) Township, and in 1797 he had to recommend the resurvey of Richmond Township. Peter Russell* claimed that Aitken and Augustus Jones* had made errors in the town plan of York. When Aitken was dying, however, Chief Justice John Elmsley* attested to his general competence, commenting that his death would be "a severe misfortune" for the public service. Little is known of Aitken's personal life.
Though not highly paid, like all surveyors he received a number of land grants; he obtained 1,500 rural acres and a town plot in Kingston. When in that city he attended St George’s Church, to which he made various donations. The constant movement necessary to his work seems to have left little time for other interests. The conditions of his work were hardly conducive to good health, and he complained of “intermitting fever,” possibly malaria; he hurt his chest in a fall from a carriole and by 1797 was suffering from tuberculosis. His burial in St George’s cemetery (now St Paul’s churchyard) took place on 1 Jan. 1800. He had never married and his land holdings passed to his father in Scotland. PAO, U. C., Lieutenant Governor’s Office, letterbook, 1799–1800, John Elmsley to Peter Hunter, 12 Nov. 1799; RG 1, A-I-1, 1–3; A-I-6, 1–2, 30; A-II-1, 1; C-I-4, 40; CB-1, 9–11. Queen’s University Archives (Kingston, Ont.), Hon. Richard Cartwright papers, account book, 1791–98; [E. E. Horsey], “Cataraqui, Fort Frontenac, Kingstown, Kingston” (typescript, 1937). Correspondence of Lieut. Governor Simcoe (Cruikshank), II, 30, 71, 99, 111; III, 178, 263; V, 13, 14, 121, 163, 202, 237ff. The correspondence of the Honourable Peter Russell, with allied documents relating to his administration of the government of Upper Canada . . . , ed. E. A. Cruikshank and A. F. Hunter (3v., Toronto, 1932–36), I, 53, 65, 169–70, 226. Kingston before War of 1812 (Preston), 107, 125, 130, 296. PAO Report, 1905, 310, 385, 389, 426, 458, 461–62, 466–68, 472, 495, 507. Quebec Gazette, 10 July 1788. The town of York, 1793–1815; a collection of documents of early Toronto, ed. E. G. Firth (Toronto, 1962), xxxii, xxxvi, 11, 14, 23, 37. F. M. L. Thompson, Chartered surveyors, the growth of a profession (London, 1968). D. W. Thomson, Men and meridians: the history of surveying and mapping in Canada (3v., Ottawa, 1966–69), I, 225–26, 231. “Alexander Aitken,” Assoc. of Ont. Land Surveyors, Annual report (Toronto), 47 (1932), 100. Willis Chipman, “The life and times of Major Samuel Holland, surveyor-general, 1764–1801,” OH, XXI (1924), 55–57.
Uranium mining

Uranium mining is the process of extraction of uranium ore from the ground. As uranium ore is mostly present at relatively low concentrations, most uranium mining is very volume-intensive, and thus tends to be undertaken as open-pit mining. It is also undertaken in only a small number of countries of the world, partly because uranium concentrations sufficiently high to motivate mining at current prices are rare. The worldwide production of uranium in 2009 amounted to 50,572 tonnes, of which 27% was mined in Kazakhstan. Kazakhstan, Canada, and Australia are the top three producers and together account for 63% of world uranium production. Other countries producing more than 1,000 tonnes per year are Namibia, Russia, Niger, Uzbekistan, and the United States. A prominent use of uranium from mining is as fuel for nuclear power plants. As of 2008, known uranium ore resources that can be mined at about current costs are estimated to be sufficient to produce fuel for about a century, based on current consumption rates. After mining, uranium ores are normally processed by grinding the ore materials to a uniform particle size and then treating the ore to extract the uranium by chemical leaching. The milling process commonly yields a dry powder-form material consisting of natural uranium, "yellowcake," which is sold on the uranium market as U3O8.

Uranium minerals were noticed by miners for a long time prior to the discovery of uranium in 1789. The uranium mineral pitchblende, also known as uraninite, was reported from the Erzgebirge (Ore Mountains), Saxony, as early as 1565. Other early reports of pitchblende date from 1727 in Joachimsthal and 1763 in Schwarzwald. In the early 19th century, uranium ore was recovered as a byproduct of mining in Saxony, Bohemia, and Cornwall. The first deliberate mining of radioactive ores took place in Jáchymov, also known by its German name Joachimsthal, a silver-mining city in what is now the Czech Republic. Marie Curie used pitchblende ore from Jáchymov to isolate the element radium, a decay product of uranium; her death was from aplastic anemia, almost certainly due to exposure to radioactivity. Until World War II, uranium mining was done primarily for the radium content. Sources of radium, contained in the uranium ore, were sought for use as luminous paint for watch dials and other instruments, as well as for health-related applications, some of which in retrospect might have been harmful. The byproduct uranium was used mostly as a yellow pigment.

In the United States, the first radium/uranium ore was discovered in 1871 in gold mines near Central City, Colorado. This district produced about 50 tons of high-grade ore between 1871 and 1895. However, most American uranium ore before World War II came from vanadium deposits on the Colorado Plateau of Utah and Colorado. In Cornwall, the South Terras Mine near St. Stephen opened for uranium production in 1873 and produced about 175 tons of ore before 1900. Other early uranium mining occurred in Autunois in France's Massif Central, Oberpfalz in Bavaria, and Billingen in Sweden. The Shinkolobwe deposit in Katanga, Belgian Congo (now Shaba Province, Democratic Republic of the Congo), was discovered in 1913 and exploited by the Union Minière du Haut Katanga. Other important early deposits include Port Radium, near Great Bear Lake, Canada, discovered in 1931; Beira Province, Portugal; Tyuya Muyun, Uzbekistan; and Radium Hill, Australia.
Because of the need for uranium for bomb research during World War II, the Manhattan Project used a variety of sources for the element. The Manhattan Project initially purchased uranium ore from the Belgian Congo, through the Union Minière du Haut Katanga. Later the project contracted with vanadium mining companies in the American Southwest. Purchases were also made from the Eldorado Mining and Refining Limited company in Canada, which had large stocks of uranium as waste from its radium refining activities. American uranium ores mined in Colorado were mixed ores of vanadium and uranium, but because of wartime secrecy, the Manhattan Project would publicly admit only to purchasing the vanadium, and did not pay the uranium miners for the uranium content. In a much later lawsuit, many miners were able to reclaim lost profits from the U.S. government. American ores had much lower uranium concentrations than the ore from the Belgian Congo, but they were pursued vigorously to ensure nuclear self-sufficiency. Similar efforts were undertaken in the Soviet Union, which did not have native stocks of uranium when it started developing its own atomic weapons program.

Intensive exploration for uranium started after the end of World War II as a result of the military and civilian demand for uranium. There were three separate periods of uranium exploration, or "booms": from 1956 to 1960, 1967 to 1971, and 1976 to 1982. In the 20th century, the United States was the world's largest uranium producer. The Grants Uranium District in northwestern New Mexico was the largest United States uranium producer; the Gas Hills Uranium District was the second largest. The famous Lucky Mc Mine is located in the Gas Hills near Riverton, Wyoming. Canada has since surpassed the United States as the cumulative largest producer in the world.

Types of uranium deposits

Many different types of uranium deposits have been discovered and mined. There are mainly three types: unconformity-type deposits, paleoplacer deposits, and sandstone-type deposits, also known as roll-front deposits.

Uranium deposits in sedimentary rock

Uranium deposits in sedimentary rocks include those in sandstone (in Canada and the western US), Precambrian unconformities (in Canada), phosphate, Precambrian quartz-pebble conglomerate, collapse breccia pipes (see Arizona Breccia Pipe Uranium Mineralization), and calcrete.

Sandstone uranium deposits are generally of two types. Roll-front type deposits occur at the boundary between the updip, oxidized part of a sandstone body and the deeper, downdip, reduced part. Peneconcordant sandstone uranium deposits, also called Colorado Plateau-type deposits, most often occur within generally oxidized sandstone bodies, often in localized reduced zones, such as in association with carbonized wood in the sandstone.

Precambrian quartz-pebble conglomerate-type uranium deposits occur only in rocks older than two billion years. The conglomerates also contain pyrite. These deposits have been mined in the Blind River-Elliot Lake district of Ontario, Canada, and from the gold-bearing Witwatersrand conglomerates of South Africa.

Igneous or hydrothermal uranium deposits

Hydrothermal uranium deposits encompass the vein-type uranium ores. Igneous deposits include nepheline syenite intrusives at Ilimaussaq, Greenland; the disseminated uranium deposit at Rossing, Namibia; and uranium-bearing pegmatites.
Disseminated deposits are also found in the states of Washington and Alaska in the US. Uranium prospecting is similar to other forms of mineral exploration, with the exception of some specialized instruments for detecting the presence of radioactive isotopes. The Geiger counter was the original radiation detector, recording the total count rate from all energy levels of radiation. Ionization chambers and Geiger counters were first adapted for field use in the 1930s. The first transportable Geiger–Müller counter (weighing 25 kg) was constructed at the University of British Columbia in 1932. H.V. Ellsworth of the GSC built a lighter, more practical unit in 1934. Subsequent models were the principal instruments used for uranium prospecting for many years, until Geiger counters were replaced by scintillation counters. The use of airborne detectors to prospect for radioactive minerals was first proposed by G.C. Ridland, a geophysicist working at Port Radium, in 1943. In 1947, the earliest recorded trial of airborne radiation detectors (ionization chambers and Geiger counters) was conducted by Eldorado Mining and Refining Limited (a Canadian Crown corporation, since sold to become Cameco Corporation). The first patent for a portable gamma-ray spectrometer was filed by Professors Pringle, Roulston and Brownell of the University of Manitoba in 1949, the same year they tested the first portable scintillation counter on the ground and in the air in northern Saskatchewan. Airborne gamma-ray spectrometry is now the accepted leading technique for uranium prospecting, with worldwide applications in geological mapping, mineral exploration and environmental monitoring. Once discovered by geophysical techniques, a uranium deposit is evaluated and sampled to determine the amount of uranium extractable from it at specified costs; uranium reserves are the amounts of ore estimated to be recoverable at stated costs. In open-pit mining, overburden is removed by drilling and blasting to expose the ore body, which is then mined by blasting and excavation using loaders and dump trucks. Workers spend much time in enclosed cabins, which limits their exposure to radiation. Water is used extensively to suppress airborne dust levels. Underground uranium mining If the uranium is too far below the surface for open-pit mining, an underground mine might be used, with tunnels and shafts dug to access and remove the ore. Less waste material is removed from underground mines than from open-pit mines; however, this type of mining exposes underground workers to the highest levels of radon gas. Underground uranium mining is in principle no different from any other hard-rock mining, and other ores are often mined in association (e.g., copper, gold, silver). Once the ore body has been identified, a shaft is sunk in the vicinity of the ore veins, and crosscuts are driven horizontally to the veins at various levels, usually every 100 to 150 metres. Similar tunnels, known as drifts, are driven along the ore veins from the crosscut. To extract the ore, the next step is to drive tunnels through the deposit from level to level, known as raises when driven upwards and winzes when driven downwards. Raises are subsequently used to develop the stopes where the ore is mined from the veins. The stope, which is the workshop of the mine, is the excavation from which the ore is extracted. Two methods of stope mining are commonly used.
In the "cut and fill" or open stoping method, the space remaining after removal of ore by blasting is filled with waste rock and cement. In the "shrinkage" method, only enough broken ore is removed via the chutes below to allow miners working from the top of the pile to drill and blast the next layer, eventually leaving a large hole. Another method, known as room and pillar, is used for thinner, flatter ore bodies. In this method the ore body is first divided into blocks by intersecting drives (removing ore while doing so), and the blocks are then systematically removed, leaving enough ore for roof support. Heap leaching is a process by which chemicals (usually sulfuric acid) are used to extract the economic element from the ore. It is generally economically feasible only for oxide ore deposits. Oxidation of sulfide deposits occurs during the geological process of weathering; therefore, oxide ore deposits are typically found close to the surface. If there are no other economic elements within the ore, a mine might choose to extract the uranium using a leaching agent, usually a dilute sulfuric acid. If the economic and geological conditions are right, the mining company will level large areas of land with a small gradient, layering it with thick plastic (usually HDPE or LLDPE), sometimes with clay, silt or sand beneath the liner. The extracted ore is typically run through a crusher and placed in heaps atop the plastic. The leaching agent is then sprayed on the ore for 30–90 days. As the leaching agent filters through the heap, the uranium breaks its bonds with the oxide rock and enters the solution. The solution then filters along the gradient into collection pools, from which it is pumped to on-site plants for further processing. Only some of the uranium (commonly about 70%) is actually extracted. The uranium concentration within the solution is very important for the efficient separation of pure uranium from the acid. As different heaps yield different concentrations, the solution is pumped to a carefully monitored mixing plant, and the properly balanced solution is then pumped into a processing plant where the uranium is separated from the sulfuric acid. Heap leaching is significantly cheaper than traditional milling processes; the low costs make lower-grade ore economically feasible, given the right type of ore body. Environmental law requires that the surrounding groundwater be continually monitored for possible contamination, and monitoring must continue even after the shutdown of the mine. In the past, mining companies would sometimes go bankrupt, leaving the responsibility for mine reclamation to the public. Recent additions to mining law require that companies set aside money for reclamation before the beginning of the project; this money is held by the public to ensure adherence to environmental standards if the company ever goes bankrupt. Another very similar technique is in-situ, or in-place, mining, in which the ore does not even need to be extracted. In-situ leaching (ISL), also known as solution mining, or in-situ recovery (ISR) in North America, involves leaving the ore where it is in the ground and recovering the minerals from it by dissolving them and pumping the pregnant solution to the surface where the minerals can be recovered. Consequently, there is little surface disturbance and no tailings or waste rock are generated.
However, the orebody needs to be permeable to the liquids used, and located so that they do not contaminate groundwater away from the orebody. Uranium ISL uses the native groundwater in the orebody, which is fortified with a complexing agent and, in most cases, an oxidant. It is then pumped through the underground orebody to recover the minerals in it by leaching. Once the pregnant solution is returned to the surface, the uranium is recovered in much the same way as in any other uranium plant (mill). In Australian ISL mines (Beverley and the soon-to-be-opened Honeymoon Mine) the oxidant used is hydrogen peroxide and the complexing agent sulfuric acid. Kazakh ISL mines generally do not employ an oxidant but use much higher acid concentrations in the circulating solutions. ISL mines in the USA use an alkali leach due to the presence of significant quantities of acid-consuming minerals such as gypsum and limestone in the host aquifers; any more than a few percent carbonate minerals means that alkali leach must be used in preference to the more efficient acid leach. The Australian government has published a best practice guide for in-situ leach mining of uranium, which is being revised to take account of international differences. Recovery from seawater The uranium concentration of seawater is low, approximately 3.3 mg per cubic meter of seawater (3.3 ppb), but the total quantity of this resource is gigantic, and some scientists believe it is practically limitless with respect to worldwide demand: if even a portion of the uranium in seawater could be used, it could provide the entire world's nuclear power generation fuel over a long time period. Some critics of nuclear power claim this statistic is exaggerated. Although research and development on recovery of this low-concentration element by inorganic adsorbents such as titanium oxide compounds occurred from the 1960s in the United Kingdom, France, Germany, and Japan, this research was halted due to low recovery efficiency. At the Takasaki Radiation Chemistry Research Establishment of the Japan Atomic Energy Research Institute (JAERI Takasaki Research Establishment), research and development has continued, culminating in the production of an adsorbent by irradiation of polymer fiber. Adsorbents have been synthesized that have a functional group (the amidoxime group) that selectively adsorbs heavy metals, and the performance of such adsorbents has been improved. The uranium adsorption capacity of the polymer fiber adsorbent is high, approximately ten times that of the conventional titanium oxide adsorbent. One method of extracting uranium from seawater uses a uranium-specific nonwoven fabric as an adsorbent. The total amount of uranium recovered from three collection boxes containing 350 kg of fabric was more than 1 kg of yellowcake after 240 days of submersion in the ocean. According to the OECD, uranium may be extracted from seawater using this method for about $700/kg-U. The experiment by Seko et al. was repeated by Tamada et al. in 2006, who found that the cost varied from ¥15,000 to ¥88,000 per kilogram of uranium depending on assumptions, the lowest cost attainable being about ¥25,000/kg-U with 4 g-U per kg of adsorbent and 18 reuse cycles in the sea area of Okinawa. At the May 2008 exchange rate, this was about $240/kg-U. Since 1981, uranium prices and quantities in the US have been reported by the Department of Energy.
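The seawater-recovery costs above are quoted in US$ per kilogram of contained uranium, while the market prices that follow are quoted in US$ per pound of U3O8. A small conversion sketch (the constants are standard; the function name is ours):

```python
# Sketch: converting uranium prices between the two conventions used
# in this article: US$/kg-U (seawater recovery) and US$/lb-U3O8
# (spot-market quotes). Uses the ~0.848 uranium mass fraction of U3O8.
LB_TO_KG = 0.45359237
U_FRACTION = 0.848   # mass fraction of uranium in U3O8

def per_kg_u_to_per_lb_u3o8(price_per_kg_u: float) -> float:
    """US$/kg contained uranium -> US$/lb of U3O8."""
    return price_per_kg_u * LB_TO_KG * U_FRACTION

# The ~$240/kg-U seawater estimate above:
print(f"${per_kg_u_to_per_lb_u3o8(240):.0f}/lb-U3O8")   # ~ $92/lb-U3O8
```

So the most optimistic seawater estimate corresponds to roughly US$92/lb-U3O8, well above most of the historical market prices discussed next.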
The import price dropped from US$32.90/lb-U3O8 in 1981 to US$12.55 in 1990 and to below US$10/lb-U3O8 by 2000. Prices paid for uranium during the 1970s were higher; US$43/lb-U3O8 is reported as the selling price for Australian uranium in 1978 by the Nuclear Information Centre. Uranium prices reached an all-time low of about US$7/lb in 2001, but have since rebounded strongly. In April 2007 the spot price of uranium rose to US$113.00/lb, the high point of the uranium bubble of 2007 and very close to the inflation-adjusted all-time high of 1977. The higher price has spurred expansion of current mines, construction of new mines, the reopening of old mines, and new prospecting. Politics of uranium mining At the beginning of the Cold War, to ensure adequate supplies of uranium for national defense, the United States Congress passed the U.S. Atomic Energy Act of 1946, creating the Atomic Energy Commission (AEC), which had the power to withdraw prospective uranium mining land from public purchase and also to manipulate the price of uranium to meet national needs. By setting a high price for uranium ore, the AEC created a uranium "boom" in the early 1950s, which attracted many prospectors to the Four Corners region of the country. Moab, Utah became known as the uranium capital of the world after geologist Charles Steen discovered a rich ore deposit there in 1952, even though American ore sources were considerably less potent than those in the Belgian Congo or South Africa. In the 1950s, methods for extracting dilute uranium and thorium, found in abundance in granite or seawater, were pursued; scientists speculated that, used in a breeder reactor, these materials could potentially provide a limitless source of energy. American military requirements declined in the 1960s, and the government completed its uranium procurement program by the end of 1970. Simultaneously, a new market emerged: commercial nuclear power plants. However, in the U.S. this market virtually collapsed by the end of the 1970s as a result of industrial strains caused by the energy crisis, popular opposition, and finally the Three Mile Island nuclear accident in 1979, all of which led to a de facto moratorium on the development of new nuclear reactor power stations. In Europe a mixed situation exists. Considerable nuclear power capacity has been developed, notably in Belgium, France, Germany, Spain, Sweden, Switzerland and the UK. In many countries, the development of nuclear power has been stopped or phased out by legal action. In Italy the use of nuclear power was barred by a referendum in 1987; however, this is now under revision. Ireland likewise has no plans to change its non-nuclear stance and pursue nuclear power in the future. Opposition to uranium mining has been considerable in Australia, where notable anti-uranium activists have included Kevin Buzzacott, Jacqui Katona, Yvonne Margarula, and Jillian Marsh. Other notable anti-uranium activists include Manuel Pino (USA), JoAnn Tall (USA), and Sun Xiaodi (China). Health risks of uranium mining Lung cancer deaths Uranium ore emits radon gas. The health effects of high exposure to radon are a particular problem in uranium mining; significant excess lung cancer deaths have been identified in epidemiological studies of uranium miners employed in the 1940s and 1950s.
The first major studies of radon and health occurred in the context of uranium mining, first in the Joachimsthal region of Bohemia and then in the Southwestern United States during the early Cold War. Because radon is a product of the radioactive decay of uranium, underground uranium mines may have high concentrations of radon. Many uranium miners in the Four Corners region contracted lung cancer and other pathologies as a result of high levels of exposure to radon in the mid-1950s. The increased incidence of lung cancer was particularly pronounced among Native American and Mormon miners, because those groups normally have low rates of lung cancer. Safety standards requiring expensive ventilation were not widely implemented or policed during this period. In studies of uranium miners, workers exposed to radon levels of 50 to 150 picocuries of radon per liter of air (roughly 2,000–6,000 Bq/m3) for about 10 years have shown an increased frequency of lung cancer. Statistically significant excesses in lung cancer deaths were present after cumulative exposures of less than 50 WLM. There is, however, unexplained heterogeneity in these results (whose confidence intervals do not always overlap): the size of the radon-related increase in lung cancer risk varied by more than an order of magnitude between the different studies. Since that time, ventilation and other measures have been used to reduce radon levels in most affected mines that continue to operate. In recent years, the average annual exposure of uranium miners has fallen to levels similar to the concentrations inhaled in some homes. This has reduced the risk of occupationally induced cancer from radon, although it still remains an issue both for those who are currently employed in affected mines and for those who have been employed in the past. The power to detect any excess risks in miners nowadays is likely to be small, exposures being much lower than in the early years of mining. In January 2008 Areva was given a Public Eye Award, a negative award for irresponsible, profit-driven environmental or social behaviour. The French state-owned company mines uranium in northern Niger, where, according to the Public Eye Awards, mine workers are not informed about health risks, and analysis shows radioactive contamination of air, water and soil. The local organization that represents the mine workers spoke of "suspicious deaths among the workers, caused by radioactive dust and contaminated groundwater." Despite efforts made in cleaning up uranium sites, significant problems stemming from the legacy of uranium development still exist today on the Navajo Nation and in the states of Utah, Colorado, New Mexico, and Arizona. Hundreds of abandoned mines have not been cleaned up and present environmental and health risks in many communities. At the request of the U.S. House Committee on Oversight and Government Reform in October 2007, and in consultation with the Navajo Nation, the Environmental Protection Agency (EPA), along with the Bureau of Indian Affairs (BIA), the Nuclear Regulatory Commission (NRC), the Department of Energy (DOE), and the Indian Health Service (IHS), developed a coordinated Five-Year Plan to address uranium contamination. Similar interagency coordination efforts are beginning in the State of New Mexico as well.
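The radon exposure figures above mix two units. A quick check with the standard conversion 1 pCi/L = 37 Bq/m3 confirms that the quoted ranges agree:

```python
# Sketch: converting the radon concentrations quoted above.
# 1 pCi = 0.037 Bq and 1 L = 0.001 m^3, so 1 pCi/L = 37 Bq/m^3.
PCI_PER_L_TO_BQ_PER_M3 = 37.0

for pci_per_l in (50, 150):
    bq_per_m3 = pci_per_l * PCI_PER_L_TO_BQ_PER_M3
    print(f"{pci_per_l} pCi/L ~ {bq_per_m3:.0f} Bq/m^3")
# 50 pCi/L ~ 1850 Bq/m^3 and 150 pCi/L ~ 5550 Bq/m^3, i.e. the
# rounded 2,000-6,000 Bq/m^3 range given in the text.
```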
Production in Australia rose significantly to 10,115 tonnes of U3O8 (22.3 million pounds) in 2007 from 19.7 million pounds in 2006, securing its position as the second largest uranium-producing country; most of the production gain came from improved operational performance and an increase in the grade of the ore mined. Australia has the world's largest uranium reserves, 24% of the planet's known total. The majority of these reserves are located in South Australia, with other important deposits in Queensland, Western Australia and the Northern Territory. The Olympic Dam operation run by BHP Billiton in South Australia is combined with mining of copper, gold, and silver, and has reserves of global significance. There are currently three operating uranium mines in Australia, and several more have been proposed. The expansion of Australia's uranium mines is supported by the Federal Australian Labor Party (ALP) Government headed by Prime Minister Julia Gillard; the ALP abandoned its long-standing and controversial "no new uranium mines" policy in April 2007. One of the more controversial proposals was Jabiluka, to be built surrounded by the World Heritage-listed Kakadu National Park. The existing Ranger Uranium Mine is also surrounded by the National Park, as the mine area was not included in the original listing of the Park. For many years Canada was the largest exporter of uranium ore; however, in 2009 the top spot was taken over by Kazakhstan. The largest Canadian mines are located in the Athabasca Basin of northern Saskatchewan. Canada's first uranium discovery was in the Alona Bay area, south of Lake Superior Provincial Park in Ontario, by Dr. John Le Conte in 1847. The Canadian uranium industry, however, really began with the 1932 discovery of pitchblende at Port Radium, Northwest Territories. The deposit was mined from 1933 to 1940 for radium, silver, copper, and cobalt. The mine shut down in 1940, but was reopened in 1942 by Eldorado Mining and Refining Limited to supply uranium to the Manhattan Project. The Canadian government expropriated the Port Radium mine and banned private claim-staking and mining of radioactive minerals. In 1947 the government lifted the ban on private uranium mining, and the industry boomed through the 1950s, spurred by high prices due to the nuclear weapons programs. Production peaked in 1959, when twenty-three mines in five different districts made uranium Canada's number-one export. That same year, however, the United Kingdom and the United States announced their intention to halt uranium purchases in 1963. By 1963, seven mines were left operating, a number that shrank to only three in 1972. Price rises caused uranium to boom again around 1975 and 2005. In 1948, prospector Robert Campbell discovered pitchblende at Theano Point, in the area of Alona Bay, Ontario, and staked 30 claims. By November 1948 a rush had begun, and in the next three years 5,000 claims would be staked in the area. A shaft and headframe were constructed but abandoned before operations could begin; the mine proved unprofitable after uranium discoveries at Elliot Lake, Ontario. Uranium was discovered in the Blind River-Elliot Lake area in 1949, and production began in 1955. The deposits are in Precambrian quartz-pebble conglomerates, similar to uranium deposits in Brazil and South Africa. Pitchblende veins were discovered near Beaverlodge Lake, Saskatchewan in 1935, and uranium mining started there in 1953.
Today the Athabasca Basin in northern Saskatchewan hosts the largest high-grade uranium mines and deposits. Cameco, the world's largest low-cost uranium producer, accounting for 18% of the world's uranium production, operates three mines and one dedicated mill in the region. Among the major mines are Cameco's flagship McArthur River mine, the developing Cigar Lake mine, the Rabbit Lake mine and mill complex, and the world's largest uranium mill at Key Lake. The French-owned uranium company Areva also operates the McClean Lake mill. Most of these mines are joint ventures between Cameco, Areva, and various other shareholders. Future mines currently in early development stages include Areva's Midwest Project (near McClean Lake) and Cameco's Millennium Project (near Key Lake). As of 2007, with uranium spot-market prices well over the US$100/lb mark, Saskatchewan has become a hotbed of uranium exploration, with many junior exploration companies rushing to explore the highly valuable Athabasca Basin. Most uranium ore in the United States comes from deposits in sandstone, which tend to be of lower grade than those of Australia and Canada. Because of the lower grade, many uranium deposits in the United States became uneconomic when the price of uranium declined sharply in the 1980s. Regular production of uranium-bearing ore in the United States began in 1898 with the mining of carnotite-bearing sandstones of the Colorado Plateau in Colorado and Utah for their vanadium content. The discovery of radium by Marie Curie, also in 1898, soon made the ore valuable for its radium as well; uranium was a byproduct. By 1913, the Colorado Plateau uranium-vanadium province was supplying about half of the world supply of radium. Production declined sharply after 1923, when low-cost competition from radium from the Belgian Congo and vanadium from Peru made the Colorado Plateau ores uneconomic. Mining revived in the 1930s with higher prices for vanadium. American uranium ores were in very high demand by the Manhattan Project during World War II, although the mining companies did not know that the byproduct uranium had suddenly become valuable. The late 1940s and early 1950s saw a boom in uranium mining in the western US, spurred by the fortunes made by prospectors such as Charlie Steen. The collapse of uranium prices caused all conventional mining to cease by 1992, with the last open-pit mine (Shirley Basin, Wyoming) shutting down that year. United States production occurred in the following states (in descending order): New Mexico, Wyoming, Colorado, Utah, Texas, Arizona, Florida, Washington, and South Dakota. In-situ leach mining has continued, primarily in Wyoming and adjacent Nebraska, and has recently restarted in Texas. Rising uranium prices since 2003 have increased interest in uranium mining in the United States. On 25 June 2008 the House Natural Resources Committee voted overwhelmingly to enact emergency protections from uranium mining for 1,000,000 acres (4,000 km2) of public lands around Grand Canyon National Park. This means the Secretary of the Interior has an obligation to protect public lands near the Grand Canyon from uranium extraction for three years. The Center for Biological Diversity, Sierra Club, and the Grand Canyon Trust recently won a court order against the Kaibab National Forest stopping uranium drilling near the national park until a thorough environmental analysis is conducted. The Grand Canyon Watersheds Protection Act has been proposed.
This bill would permanently ban uranium mining in the area. The impacts of uranium development have raised concerns among scientists and government officials alike. Due to increasing demand, uranium projects have been on the increase, raising concerns about water, public health, and fragile desert ecosystems. Kazakhstan produced some 7,847 tonnes of U3O8 (17.3 million pounds) in 2007, much more than in 2006. Kazatomprom's four wholly owned ISR mining groups combined produced half of the total output. The World Nuclear Association states that Russia has known uranium deposits of 500,000 tonnes and plans to mine 11,000 to 12,000 tonnes per year by 2010 from deposits in the South Urals, Western Siberia, and Siberia east of Lake Baikal. The Russian nuclear industry underwent an overall restructuring during 2007. Production was high at almost 4,000 tonnes of U3O8 (8.8 million pounds) from three operating mines in 2007. Atomredmetzoloto reported that the Priargunsky mine yielded 7.8 million pounds in 2007, a slight decline from the 8.2 million pounds reported by TVEL in 2006. The Dalur (Dolmatovskoye) and Khiagda ISR mines produced 910,000 pounds and 68,000 pounds, respectively, in 2007; both ISR projects are expected to increase production steadily through 2015. European uranium mining supplied just below 3% of total EU needs, coming from the Czech Republic and Romania (a total of 526 tU). Production at the Rožná mine was to be terminated in 2008, but the Czech Government decided in May 2007 to continue mining, extending the mine's lifetime without time limit for as long as it remains profitable. Bulgaria shut down its facilities for environmental reasons in 1992; the sites were recultivated, but recently there has been some interest in resuming activities. Industrial mining there first started in 1938 and was resumed after 1944 by a joint Soviet-Bulgarian mining company, reorganized in 1956 into the government-owned Redki Metali (Rare Metals) concern. At its peak, it had thirteen thousand employees and operated forty-eight uranium mines and two processing plants, at Buhovo outside Sofia and at Eleshnitsa near Bansko. Yearly production was estimated at 645 t, meeting about 55% of the needs of the Kozloduy Nuclear Power Plant, which at its peak had six reactors with a total output of over 3,600 MWe. The Czech Republic is the birthplace of industrial-scale uranium mining. Uranium mining at Jáchymov (at that time named Joachimsthal and belonging to Austria-Hungary) started on an industrial scale in the 1890s, after the silver and cobalt production of the deposit declined. Uranium was first utilised mainly to produce yellow colours for glass and porcelain manufacture. After the Curies in France discovered polonium and radium in tailings from Jáchymov, the town became the first place in the world for commercial radium production from uranium ore. Radioactive water from the mines was also used to establish a health resort for radon treatments that still exists today. Pre-Cold War production is estimated at around 1,000 t of uranium. From 1947 onward, the country produced uranium for the Soviet Union. Early mining sites such as Jáchymov, Horní Slavkov and Příbram became known as parts of the "Czech Gulag". In total, the Czech Republic produced 110,000 t of uranium up to 1992 from 64 uranium deposits. The largest deposit, Příbram (vein-style), produced about 50,000 t of uranium and was mined to a depth of over 1,800 m.
Today, the Rožná underground facility 55 km northwest of Brno is Europe's only operating uranium mine, in continuous operation since 1957; it produces about 300 t of uranium annually. Since 2007, the Australian company Uran Ltd. has been interested in participating in the operations at Rožná, as well as seeking permits from the Czech Ministry of Trade and Resources to open mines at other known Czech locations, such as Brzkov, Jamné, Polná and Věžnice, through its Czech partner Timex Zdice and, since 2008, through its subsidiary Urania Mining. In addition, Talvivaara Mining Company plc announced in early 2010 the commencement of uranium recovery as a by-product of its mine in Sotkamo, eastern Finland, which mainly produces nickel, copper, zinc and cobalt. Production is expected to be approximately 350 tons of yellowcake annually, making Finland almost self-sufficient in uranium by covering approximately 80% of annual demand. However, as Finland lacks the facilities to convert yellowcake into nuclear fuel, the mine's output will need to be sent abroad for conversion and enrichment. The search for uranium ore intensified during the Cold War, but only in East Germany was an extensive uranium mining industry established. Uranium was mined from 1947 to 1990 in Saxony and Thuringia by the SDAG Wismut. All the uranium mines were closed after German reunification for economic and environmental reasons. Total production in East Germany was 230,400 t of uranium, making it the third largest producer in history, behind the USA and Canada. Minor production still takes place at the Königstein mine southeast of Dresden, from the cleaning of mine water; it amounted to 38 t of uranium in 2007. In Hungary, uranium mining began in the 1950s around Pécs to supply the country's first nuclear power plant at Paks. A whole district, named Uránváros (Uranium City), was built for the mining industry on the outskirts of Pécs. After the fall of communism, uranium mining was gradually abandoned because of high production costs, causing serious economic problems and a rise in unemployment in Pécs. Recently an Australian company has taken up the challenge of searching for uranium in the Mecsek. In Slovakia, uranium was formerly mined from stratiform deposits at Novoveská Huta near Spišská Nová Ves. A mine for the extraction of uranium ore was established in the hills of Jahodna near the city of Košice, and Tournigan Energy is mining uranium at the Kuriskova mine near Košice. Several other uranium deposits are found in the Považský Inovec Mts. near Kálnica, in the area of Petrova Hora near Krompachy, and in the Vikartovský chrbát in the Kozie chrbty Mts. None of them is currently mined. In Spain, the Australian company Berkeley Resources Ltd. and Korea Electric Power mine uranium in Salamanca Province, near the city of Ciudad Rodrigo. Berkeley Resources is also active in the provinces of Cáceres, Barcelona and Guadalajara. In Sweden, uranium production took place at Ranstadsverket between 1965 and 1969 by mining alum shale (a kind of oil shale). The goal was to make Sweden self-sufficient in uranium. The high operating costs of the pilot plant (heap leaching), due to the low concentration of uranium in the shale, together with the availability at that time of comparatively cheap uranium on the world market, caused the mine to be closed, although a much cheaper and more efficient leaching process, using sulfur-consuming bacteria, had by then been developed.
Since 2005 there have been investigations into opening new uranium mines in Sweden. In the United Kingdom, the South Terras Mine in Cornwall was mined for uranium from 1873 to 1903. Substantial uranium deposits were found on Orkney in the 1970s; when Margaret Thatcher proposed a uranium mine on Orkney, a campaign followed which successfully argued that uranium mining would mean irreversible environmental, social and psychological damage. Democratic Republic of the Congo (DRC) In the DRC, uranium is extracted in the mineral-rich province of Katanga, for example at Shinkolobwe, Mindigi, Kalongwe, Kasompi, Samboa and the Emmanuel Depot in Kolwezi. The uranium for the nuclear bombs dropped on Japan at the end of the Second World War came from what was then the Belgian Congo. The major player is Gécamines, the state mining company. Namibia produces uranium at the Rossing deposit, an igneous deposit worked in one of the world's largest open-pit mines; the mine is owned by a subsidiary of the Rio Tinto Group. The Langer Heinrich calcrete uranium deposit was discovered in 1973, and its open-pit mine was officially opened in 2007. In 2007, Niger produced a total of 3,720 tonnes of U3O8 (8.2 million pounds), coming mainly from the Akouta (Cominak) and Arlit (Somair) mines. Niger's uranium came to world attention before the US invasion of Iraq, when it was asserted that Iraq had attempted to buy uranium from Niger (see Niger uranium forgeries). In 2007 China mined 636 tonnes of U3O8, a 17% decrease from its 2006 production. In India's Nalgonda District, the Rajiv Gandhi Tiger Reserve (the only tiger project in Andhra Pradesh) has been forced to surrender over 3,000 square kilometres to uranium mining, following a directive from the Central Ministry of Environment and Forests. In 2007, India extracted 229 tonnes of U3O8 from its soil. On July 19, 2011, Indian officials announced that the Tumalapalli mine in the state of Andhra Pradesh could provide more than 170,000 tonnes of uranium, making it the world's largest uranium mine; production of the ore was slated to begin the following year. As India vies for enriched uranium from Nuclear Suppliers Group (NSG) members to obtain raw material for its nuclear power plants, scientists have found massive uranium deposits in the mines of Tumalapalli, a site with the potential to emerge as the largest reserve of the key nuclear fuel in the world. The Department of Atomic Energy (DAE) recently established that the upcoming mine at Tumalapalli has close to 49,000 tonnes of uranium reserves, three times the original estimate of the area's deposits and a shot in the arm for India's nuclear power aspirations. In fact, there were indications that the total quantity could go up to 1.5 lakh (150,000) tonnes, which would make it among the largest uranium mines in the world. That Tumalapalli might have uranium reserves has been known for a while, but it took four years for the estimate to reach the present level. Jordan, the only Middle Eastern country with confirmed uranium, is estimated to have around 140,000 tonnes of uranium reserves plus a further 59,000 tonnes in phosphate deposits. Although no uranium has been mined yet, it was announced in 2008 that the Jordanian government had signed an agreement with the French company AREVA to explore for uranium, a step toward building a future nuclear plant in Jordan.
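Several of the national production figures above are quoted both in tonnes of U3O8 and in millions of pounds. A quick consistency check (1 tonne = 2,204.62 lb):

```python
# Consistency check for the 2007 production figures quoted above.
LB_PER_TONNE = 2204.62

for country, tonnes in [("Australia", 10_115), ("Kazakhstan", 7_847), ("Niger", 3_720)]:
    mlb = tonnes * LB_PER_TONNE / 1e6
    print(f"{country}: {tonnes} t U3O8 ~ {mlb:.1f} million lb")
# Australia ~22.3 Mlb, Kazakhstan ~17.3 Mlb, Niger ~8.2 Mlb,
# matching the figures given in the text.
```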
- List of uranium mines
- Nuclear fuel cycle
- Peak uranium
- Radiation poisoning
- Radioactive contamination
- Uranium market
- Uranium metallurgy
- Uranium mining debate
- Uranium mining controversy in Kakadu National Park
- Uranium reserves
Then: In 2008, Chris Finberg, a Las Vegas Department of Field Operations employee, developed a preventive maintenance program that aimed to reduce roadway lifecycle costs by 66%. Created to benefit the city's residential streets, the process had just been approved for a test run on more heavily traveled roads. Now: Finberg's process of treating asphalt with a hydrophobic sand mixture to avert water intrusion is being used citywide. Filling the pavement with this composition restores the surface's original grade, so less material needs to be removed for repairs, allowing for faster completion. It is also simple enough that any employee can perform repairs. The process is virtually the same as it was when it was introduced in 2008. Watch: Finberg talks about his projects for the city.
The term 'textile' derives from the Latin word 'texere', which means 'to weave'. The history of textiles is almost as old as that of human civilization. In India, the culture of silk was introduced around 400 AD, and the modern textile industry took birth in the early nineteenth century: the first cotton textile mill of Bombay was established in 1854. By 1900 the cotton textile industry was in a bad state. After independence, the cotton textile industry made rapid strides under the Plans.
Romaji (often misspelled 'romanji') is a system of writing Japanese based on the Latin alphabet. Romaji uses Western letters, the same ones used for English spelling, to transcribe the Japanese language. It is commonly used to teach Japanese to Westerners or to give examples of pronunciation, but it still follows the conventions of Japanese, including rules for when and how it is used.
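To make the idea concrete, here is a minimal, illustrative transcription sketch; the four-entry table below is ours and covers only this one word, whereas real romaji systems handle the full kana inventory plus long vowels, particles and other rules:

```python
# Tiny, illustrative kana-to-romaji table (Hepburn-style spellings).
KANA_TO_ROMAJI = {
    "に": "ni", "ほ": "ho", "ん": "n", "ご": "go",
}

def to_romaji(kana: str) -> str:
    """Transcribe a hiragana string using the tiny table above."""
    return "".join(KANA_TO_ROMAJI[ch] for ch in kana)

print(to_romaji("にほんご"))   # -> "nihongo" (Japanese)
```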
Idea generation, supported by creativity techniques, generally means the purposeful production of new ideas in order to solve a problem. In business, the term is used particularly in the context of innovation workshops and innovation projects. Suitable methods help to specify problems, to accelerate the generation and flow of ideas from individuals or groups, to widen the direction of search, and to dissolve mental blocks. For badly structured, open problems, the number and kind of possible solutions are not given in advance; every result of the solution process is only a relatively optimal solution at a certain point in time. Creativity techniques are used to stimulate creativity in order to find completely new, not yet realized solutions. Colloquially the term 'idea finding' is used, but the emphasis lies on generating new ideas rather than on searching for (already existing) ones. In contrast to the rather coincidental 'brainstorm', idea generation means the purposeful production of ideas at a defined time. Numerous methods have been developed for this. They are not algorithms that lead to one 'correct' result, but heuristics: process steps that have proven effective in practice and that deliver different results on each application. The best known is brainstorming, developed in the 1950s in the USA by Alex Osborn and understood since then as the epitome of idea generation. Idea generation methods are suitable only for problems whose solution method is still unknown (so-called 'ill-structured' problems), not for problems for which a known solution method exists (so-called 'well-structured' problems). The quality and quantity of the ideas depend on the task, the method applied, the participants, and in particular their inner attitude; the results are not known in advance. Quality increases when the participants use creative thinking strategies, and there are public and in-house seminars that train these methods. Most methods are known as group methods but can usually also be used by individuals. For idea generation in this sense, groups of 7 to 14 participants are usually formed, and depending on the method such a session lasts between 30 and 60 minutes. A group has the advantage that not only a larger number but also a greater diversity of solution ideas can be attained; the group's composition should therefore be as heterogeneous as possible. For the group to work effectively, a moderator is usually necessary who knows the method and coaches the participants accordingly. The methods usually first deliver rough approaches, which are then selected, developed further into idea concepts, concretized, and finally prepared for realization (evaluation procedures and selection strategies). Creativity methods can be divided into intuitive and discursive methods. Intuitive methods deliver a great many ideas within a short time (100-400 single ideas in 30 minutes). They promote thought associations in the search for new ideas and activate the unconscious: knowledge one would not otherwise think of. These methods help participants leave well-worn tracks of thinking. They activate the potential of whole groups and lay a broad foundation before work continues with discursive methods. The best known is probably brainstorming, carried out in groups and practised in a multitude of variants. Its written form, brainwriting, has in turn produced many variants. Another strand of intuitive methods works with analogy and estrangement, so that solutions from one domain supply suitable ideas for another, as in bionics. Discursive methods deliver 10-50 ideas in 30 minutes. They guide the process of the solution search systematically and consciously through individual, logically sequential steps (discursive = progressing logically from term to term). Such methods describe a problem completely by splitting it analytically into its smallest units, as with the morphological box, whose criteria and expressions describe a problem clearly and without overlap (the English keyword is MECE: mutually exclusive, collectively exhaustive); a small code sketch of this follows below. Likewise the relevance-tree analysis, which becomes more precise from branch to branch. In addition, integrated creativity approaches have been developed that unite intuitive and discursive elements. See also: creativity, phases of the creative process, innovation, creative writing.
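As the sketch promised above: a morphological box lists independent criteria with their possible expressions and then enumerates every combination as a candidate concept. The criteria below are invented purely for illustration:

```python
# Minimal morphological-box sketch: enumerate all combinations of
# mutually exclusive expressions across independent criteria.
from itertools import product

box = {
    "energy source": ["battery", "solar", "mains"],
    "housing":       ["plastic", "metal"],
    "interface":     ["buttons", "touchscreen", "voice"],
}

combinations = list(product(*box.values()))
print(f"{len(combinations)} candidate concepts")   # 3 * 2 * 3 = 18
for combo in combinations[:3]:                     # show a few
    print(dict(zip(box.keys(), combo)))
```

The combinatorial blow-up is the point: even three small criteria yield 18 concepts, which is why the method pairs enumeration with subsequent evaluation and selection.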
Nobel Peace Laureate Muhammad Yunus, the founder of Grameen Bank, otherwise known as the "Bank for the Poor," gave a special lecture on the theme of social business to over 20 educational institutes in India simultaneously. Yunus argued that the concept and principles of social business can be a primary means of easing poverty; those principles are the foundation of Grameen Bank, an institution that offers financial resources to even the poorest of the poor with no prior credit. The Wockhardt Foundation, a non-profit organization engaged in social service and human welfare, organized the lecture through live video conferencing. Offering the lecture through a web-based, online conferencing system made the presentation available to over 10,000 students in India. Not only did the students benefit from listening to Yunus's lessons, but they were also given the opportunity to ask questions or offer comments following the lecture. Aspiring entrepreneurs could ask him how he built the foundation for Grameen Bank and how to get such an innovative and unique project off the ground with the support of others. Educators across the globe can use video conferencing to open up discussion and spark fresh ideas among ten times the number of students they could reach in a normal classroom.
Many more people are gaining interest in solar energy to power their homes these days. The best way to become knowledgeable about solar power is to learn everything you can about its true potential; this article is a great starting point. Photovoltaic panels fall into one of two varieties: poly-crystalline panels tend to be cheaper, but they are generally less efficient than mono-crystalline panels. Compare cost and efficiency before making a final decision. You don't have to tear up your whole roof to start using solar power; you can begin by making good use of solar-powered lights everywhere outside your home. Determine a storage plan for the energy your system produces. A good battery can store a lot of power for a long time, or you may be able to sell surplus energy back to a utility company. If you are thinking about leasing solar panels, make sure your contract offers you the ability to transfer the lease. Have a back-up plan in case the solar panels malfunction; you can use a generator or stay connected to the power grid. Position the solar panels so that they gain maximum sun exposure year round. If you are unsure how to accomplish this, study the direction of the sun and how it changes through the seasons. Photovoltaic panels work most efficiently in areas that receive at least five hours of unobstructed sunlight a day. Solar heating can also lower the cost of keeping a pool heated. Solar energy hardly affects the environment, which makes it a great alternative energy source, and now is a good time to look into it. Know exactly how much energy you use on a daily basis before switching to clean energy; this will help you choose a properly sized solar energy system, and you should look at usage over a whole year, since it varies by season. You can be free of the grid when you use solar power: you can choose to be independent of the power companies in your area, say goodbye to your monthly electric bill, and even be paid by the electric company if you produce a surplus. Bigger is not always better when considering solar panels; weigh efficiency, wattage, and warranty, not just size, before making your purchase. Pay attention to weather conditions before committing to a solar installation. Solar panels are a good option if there are five hours of direct sunlight a day in your area; think about other choices if your area is often cloudy or very snowy in the winter. Get your solar energy system inspected twice yearly to make sure it's working right. During a check, the technician can examine the connections, verify that the angle of your panels is correct, and confirm that the power inverter is working properly. Think about using a solar-powered water heater. Water heating makes up a large portion of total energy consumption, and with up-front costs considerably lower than a full power system, a solar water heater will show a quicker return on your investment. Avoid standing or walking on your installed solar panels; if you cannot avoid it, step in the middle of the panel. Before purchasing solar panels from a company, be sure to perform a background check. You want to do business with companies that are likely to still be around for many more years, which protects your warranty coverage and your ability to get replacement parts later on. Remember that not all solar panels need to be installed on the roof of your home or business.
Using adjustable mounts or sun-tracking systems can ensure that your solar panels gather much more energy than fixed mounts. You can even earn extra money from a solar energy system: whether it powers your home or your business, the investment begins to pay off immediately. Solar energy is not a failing proposition just because certain companies have failed, and it can save you a lot on utility bills. A shaded roof is not going to make good use of solar panels, and any company that says otherwise is questionable. Don't deal with salespeople who pressure you into a solar energy purchase. If you end up in an awkward scenario, ask for another sales representative; most reputable solar energy companies do not use this tactic. If you have spoken to the manager and are still feeling pressured, it is time to look into another company. You can also compare offers from website to website. Anyone who is planning to build a home is a perfect candidate for solar energy. South-facing windows provide the most sun during the winter, and thoughtful design can also block the high summer sun to reduce cooling costs. Finally, make certain that none of the areas where you are installing solar panels are in the shade. Solar energy is a concept that many people are interested in, yet a majority of the population still has a lot to learn about it. This article has given you a solid foundation to begin your own exploration into solar power.
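To put rough numbers behind the advice above on knowing your daily usage and the five-hour sunlight guideline, here is a minimal back-of-the-envelope sizing sketch. Every figure in it (household usage, panel wattage, loss factor) is an illustrative assumption, not vendor data; a real installer would size from your actual bills.

```python
# Back-of-the-envelope solar sizing sketch. All inputs are assumptions
# for illustration only.
daily_usage_kwh = 30.0   # assumed household consumption per day
peak_sun_hours = 5.0     # the five-hour sunlight guideline above
panel_watts = 350.0      # assumed rating of a single panel
derate = 0.80            # assumed 20% system losses (inverter, wiring, heat)

kwh_per_panel = panel_watts / 1000.0 * peak_sun_hours * derate
print(f"each panel yields roughly {kwh_per_panel:.2f} kWh/day")
print(f"panels needed: about {daily_usage_kwh / kwh_per_panel:.0f}")
```

Under these assumptions each panel delivers about 1.4 kWh per day, so a 30 kWh/day household would need on the order of 21 panels; fewer hours of sun or higher losses push that number up quickly, which is why the seasonal usage review matters.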
In many ways, it's a sad story: The groundwater a Wyoming couple relies on to sustain their little farm suddenly turns foul. So Louis Meeks embarks on a six-year crusade to discover how it happened, suspecting that nearby natural gas wells are somehow involved. He battles corporations and governments and alienates many of his neighbors, yet today his water is still contaminated. There's no happy ending, no justice in sight. But Meeks and other gas-patch crusaders have accomplished something important: They've drawn attention to the industry's sometimes sloppy practices, particularly when it comes to hydraulic fracturing, or "fracking." When a gas or oil well is fracked, chemicals and water are injected deep underground to fracture rock formations and release gas and oil. Nowadays, it's an essential part of the process. The industry insists that fracking is safe. But some of the chemicals used in the process are carcinogenic, and the industry has fought to keep the exact ingredients secret. And though fracking is used in many thousands of gas and oil wells from the Southwest to New York state, there's never been a comprehensive, on-the-ground scientific study of its possible impacts on drinking water. That is now changing, thanks to people like Meeks and determined staffers within the U.S. Environmental Protection Agency. The EPA is finally launching the first real study of the risks posed by fracking; it plans to investigate the whole "life cycle" of the process and "the potential adverse impact ... on water quality and public health." Meanwhile, some companies have begun providing information about the chemicals they use, if only to reduce the possibility that stiff regulations will be imposed. So the saga of Louis Meeks is not just sad; it also offers hope. Determined citizens can make a difference. And the story offers hope in another arena -- the future of journalism. It was written by Abrahm Lustgarten, a reporter for ProPublica, the most substantial of the new nonprofit, online-only news operations. ProPublica was formed only three years ago and already it has won two Pulitzer Prizes -- the first Pulitzers ever awarded for online journalism. ProPublica specializes in investigative journalism, digging into topics ranging from medical care to Wall Street shenanigans. Based in New York City, supported by foundation grants and donations, it has more than 15 staff reporters and six editors who are determined to do "stories that make a difference, stories with moral force." Sound familiar? High Country News has a similar mission, centered on the American West. At 41 years old, HCN might even be the oldest nonprofit news operation. Supported mainly by our subscribers but also by some grants and advertising, our magazine reaches roughly 60,000 readers. Our website, hcn.org, is seen by hundreds of thousands more. In an era when journalism is undergoing wrenching changes, HCN is eager to work with the new generation of online-only operations. We are proud to print the story of Louis Meeks.
This post is sponsored by UPS. "We're going green" has such a nice ring to it, doesn't it? From environmental responsibility to consumer safety to good old-fashioned warm feelings, greening one's business practices has become both a popular choice and an easy claim. But being a truly sustainable business is about more than participating in the community's recycling program and slipping the word "green" into all marketing materials. Like most things worth doing, going green is a three-dimensional effort that can affect internal operations, relationships with vendors, and financial commitments. Here are a few ways a business can deepen and widen its approach to sustainability. 1. Reduce, Reuse, Recycle. The three R's begin with "reduce," but it's often an underappreciated step in the process. After all, the more a business reduces its use of materials in the first place, the lower the effort required to reuse and recycle becomes. The National Federation of Independent Businesses suggests businesses begin with a waste audit to determine just how much waste is being created and where it could potentially be diverted. The organization also suggests better managing inventory to reduce the number of expired or unsellable products, avoiding purchasing products with unnecessary packaging, and engaging both employees and customers in the effort to reduce the use of unnecessary materials. Of course, businesses must use a certain number of products, packaging, and materials, and thinking creatively to recycle, reuse, and even donate those items will strengthen a company's commitment to sustainability. Even better, when businesses tell customers how to manage their products sustainably, they show a firm commitment to the life cycle of their products. 2. Utilize Green Vendors. Because no business is an island, being truly green means evaluating not only a company's own sustainability practices, but also those of its vendors. "Much of your business' environmental footprint may seem beyond your control," wrote Marcos Cordero, CEO of the Green Business Bureau, for Intuit. "Vendors and other partners may use unsustainable materials, packaging, and processing techniques, but their manufacturing decisions still ultimately affect your company's environmental impact." Cordero suggests that businesses make their emphasis on sustainability clear to their vendors and set written guidelines outlining their green expectations. Looking for certification from outside organizations, such as the Green Business Bureau, can help make the evaluation process easier. 3. Participate In A Carbon Offset Program. Simply by existing, all businesses and their vendors have a carbon footprint, no matter how committed they are to sustainability. While a company can't reduce itself out of business, purchasing carbon offsets can shrink its environmental impact. Cornell University economist Robert H. Frank called carbon offsets "an excellent idea" in The New York Times. "If our goal is to reduce carbon emissions as efficiently as possible," he wrote, "offsets make perfect economic sense." True sustainability may be a comprehensive and multi-dimensional pursuit, but in a marketplace crowded with "green business" claims, being able to back those claims up with smart and effective practices isn't just good for the world; it's good for business. -Written by Natalie Burg
What is Petroleum Engineering? If you're interested in earth sciences - and you like the idea of getting paid to travel the world - consider becoming a petroleum engineer. Petroleum engineers seek out oil and gas reservoirs beneath the earth's surface. They develop the safest and most efficient methods of bringing those resources to the surface. And as demand increases for alternative energy, some forward-thinking petroleum engineers are turning their talents to clean energy products that produce fewer harmful carbon emissions. Many petroleum engineers travel the world or live in foreign countries -- wherever their explorations take them to find and recover valuable reserves. These travels can lead to the deserts, high seas, mountains, and frigid regions of the world in order to find untapped sources of energy for the world's population. The work of petroleum engineers keeps the world running. They help provide the energy to heat our homes, cook our food, and fuel our cars. However, petroleum engineers study more than just combustible material. Manufacturers use petroleum to create more than three hundred everyday products, from medicines and cosmetics to plastics and textiles. Earning an on-campus or online college degree in petroleum engineering does not mean you must earn a living in another country. Plenty of other jobs exist in the profession at home as well as abroad. Petroleum engineers might oversee drilling sites or work indoors in a laboratory or at a computer. A wide range of career possibilities exists within the profession. What Do Petroleum Engineers Do? After locating reservoirs of crude oil and natural gas, petroleum engineers find ways to bring those substances out of the ground for processing. The two primary ways of getting the reserves to the surface are "drilling" and "producing." Drilling creates a tunnel down to the oil and involves creating a system of pipes and valves to bring it up. When producing, petroleum engineers locate reserves that are already under pressure. If they don't erupt on their own, the engineers use their talents to coax the substances above ground. The petroleum engineer is involved in nearly all phases of the production process, from finding the oil through refining and distributing it. Using skills that are often associated with the earth sciences, petroleum engineers examine a variety of geologic and engineering data to determine the most likely sources of oil. Because many of these locations are in out-of-the-way places, professionals involved in this aspect of petroleum engineering often have to travel extensively, or set up residency in a foreign country for a time. Once a reserve has been located, the petroleum engineer must determine the quantity and quality of the product to be extracted. Will there be enough of sufficient quality to make the substantial investment in money and labor worth the effort? Even after a company has decided to drill, the petroleum engineer must determine the best and most efficient means of extracting the resource. Petroleum engineers examine the recovered oil and gas for quality before separating the different elements.
They often find a mixture of oil, gas, water, and other components that must be separated and refined. Petroleum engineers oversee this process. They also design and develop the physical plants necessary for carrying out these tasks safely and efficiently. Aside from everyday gasoline, petroleum is also used in jet fuel, diesel fuel, kerosene, propane, and heating oil for homes. Some electricity-generating plants are even fueled by natural gas. Plastic food wraps, car tires, household containers, toys, and other plastics are made from petroleum byproducts. The fibers used in some clothing are also developed from petrochemicals. Career Education in Petroleum Engineering. Most petroleum engineering degrees exist at the master's level, so a science and/or engineering degree is recommended for anyone interested in pursuing a career in petroleum engineering. Your undergraduate curriculum should emphasize math, chemistry, and physics. In addition, classes in language, composition, and economics are recommended. Always remember that you will be working as part of a team, writing reports, and drafting proposals. Therefore, the ability to communicate effectively is an important asset to develop in college. Petroleum engineering students take basic engineering courses before moving into more specialized classes like geology, well drilling, reservoir fluids, fluid flow, petroleum production, and reservoir analysis. If you choose to earn a BS in petroleum engineering, you might be assigned to an office position for orientation before being sent out for field experience. Some of these entry-level experiences include well-work operations, facilities production, surveillance activities, or even drilling. Anyone who considers a career in petroleum engineering should be prepared for continual learning. While many classroom-based engineering principles remain the same, technology and methods are always shifting, and the increasing problem of global climate change is inescapably intruding upon the profession, forcing industries to adapt. Professional organizations such as the Society of Petroleum Engineers offer short courses to update skills and to continue your professional development. Computers play an increasingly important role in this industry. Petroleum engineers should graduate with solid computer skills, and they should stay abreast of software and hardware changes in their field. Petroleum companies own many of the supercomputers currently in use around the world. Personal computers are used for such operations as analyzing data collected during fieldwork and automating oilfield production. Experienced petroleum engineers can choose to live almost anywhere in the world. Consider the location of the companies with whom you would like to work, where they have headquarters, and where they have oil fields. Many petroleum engineers can be found in California, Texas, Alaska, Louisiana, and Oklahoma. Many top graduates receive several offers, so consider your own preferences and the opportunities presented by each company. Is an Advanced Degree Necessary to Be a Petroleum Engineer? Although most degree programs specific to petroleum engineering exist at the master's level, petroleum engineering can be quite rewarding even without an advanced degree. A bachelor's degree in petroleum engineering is the most valuable bachelor's degree one can have, a Georgetown University survey found. The poll, based on 2009 U.S.
Census data, found that people with only a bachelor's degree in petroleum engineering earned a median annual salary of $120,000 -- the highest of any collegiate major. Many new engineers advance rapidly through their companies as they gain on-the-job experience. A graduate with a bachelor's degree can expect to move into a challenging assignment quickly, but petroleum engineers usually seek a master's degree to qualify for positions in technical or managerial areas. Online master's degree programs have become popular among working professionals (especially those in remote locations) who have already mastered hands-on skills and are looking to improve their career prospects. A PhD, from an online or on-campus program, is usually the ticket to a research and/or teaching career for a petroleum engineer with solid professional credentials. What Can You Do With a Major in Petroleum Engineering? You might have seen old movies with oil-well gushers splattering the drilling crew, spewing precious barrels of oil on the ground. Though it makes for a great image, the petroleum engineer must assure their employer that this scenario will never happen. Gushers do not surprise drilling crews anymore: petroleum engineers, using very precise and sophisticated equipment, can determine in advance where the oil is and how deep it lies. There are four areas of concern to petroleum engineers: - Finding the oil - Evaluating its potential - Maximizing its recovery - Transportation & storage These are performed by three broad categories of engineers: - The drilling engineer - The production engineer - The reservoir engineer Petroleum engineering consists of many different specialties. It can involve working with contractors to: - Design and oversee multi-million dollar drilling operations - Run experiments to improve oil and gas production - Create computer-simulated models to determine the best recovery process Petroleum engineers can specialize in environmental safety regulations, or they can move into other areas such as entrepreneurship and consulting. Another developing opportunity is in sales engineering. This involves the service and testing functions for various types of equipment in the industry. Career Specialties for Petroleum Engineers: - Geologists look for crude oil and natural gas by studying rock formations and cuttings from drilling sites. They can analyze data from geological surveys, field maps, and seismic studies to help identify reservoirs. - Geophysicists study the earth's external and internal composition. They examine ground and surface waters, atmosphere, and magnetic and gravitational fields. They combine the principles of mathematics, physics, and chemistry along with three-dimensional computer modeling to locate oil and gas reserves. - Before a well can be drilled in the United States, the drilling company must obtain the rights from the landowner. This responsibility falls to the petroleum landman, who must obtain the government permits and negotiate the rights from ranchers, farmers, or other landowners. The job combines legal knowledge with communication, research, and negotiation skills. - A drilling operation can cost millions of dollars. Therefore, it is necessary to determine the best and most economical plan for drilling. The drilling engineer works with the drilling contractors to confirm the location and design a procedure to accomplish their task.
- Before, during, and after a drilling project, the well-log analyst is responsible for obtaining core samples and analyzing them for potential. Analysts must use sophisticated equipment, such as electronic, nuclear, and acoustic tools. They rely on their talents to translate the data from these systems into meaningful recommendations. - Once a well has been drilled, the production engineer must determine the best way to bring the petroleum to the surface. - To achieve as much profit as possible from a well, companies need to bring as much oil to the surface as possible. The reservoir engineer, often working in conjunction with the production engineer, examines the fluid and pressure distributions throughout the reservoir to achieve maximum results. - Facility engineers separate, process, and transport the oil and natural gas after it has been brought to the surface. They also design and build pipelines to move the petroleum from the drill site all the way to the point of sale. - The safety engineer is responsible for ensuring the safety of the people who work around the oil and natural gas. S/he keeps track of safety regulations and designs plans to make certain those guidelines are met and documented. - Environmental/regulatory specialists might come from a variety of areas, but can include petroleum engineers. Working with a team of experts, they make sure all environmental regulations are met. - Chemical engineers might be involved in anything from designing a plant for processing oil to researching new products or improving current production. - Petroleum accountants are charged with placing a value on the oil and gas that might be produced in the future, thereby establishing corporate assets. - The energy economist must analyze business conditions and develop financial strategies that are critical to a company's success. An understanding of finances and the petroleum industry is vital. Several other careers can blossom out of a petroleum engineering degree. Petroleum engineers who have obtained a certain level of competence and respect in the industry can move on to consulting for several companies instead of working for just one. Some professionals might also decide to develop their own companies or obtain an advanced degree to move into an academic career. Certification and Licensure. In an effort to promote the industry and protect the public welfare, the Society of Petroleum Engineers has been heavily involved in establishing standards for minimum competency requirements. Engineers at different career levels can use the standards established by the SPE to guide their development.
Oil reserves in Ghana: petroleum and natural gas production. Ghana Oil Company (GOIL), the 100% state-owned filling station company, is the number one petroleum and gas filling-station operator in Ghana, and commercial quantities of offshore oil reserves in Ghana were discovered in the 1970s. In 1983 the government established the 100% state-owned Ghana National Petroleum Corporation (GNPC) to promote hydrocarbon exploration and production of Ghana's entire petroleum and natural gas reserves. GNPC prospected in ten offshore blocks between Ada, along the eastern international border of Ghana, the Tano River Basin, and the Keta Basin. In 1989, CN¥184 million or GH₵64.9 million (US$30 million) was spent drilling wells in the Tano basin, and on 21 June 1992 an offshore Tano basin well produced about 6,900 barrels (1,100 m3) of crude oil daily. In the early 1990s, GNPC reviewed all earlier crude oil and natural gas discoveries to determine whether a predominantly local operation might make exploitation more commercially viable. GNPC wanted to set up a floating system for production, storage, off-loading, processing, and gas-turbine electricity generation, hoping to produce 22 billion cubic feet (620,000,000 m3) per day, from which 135 megawatts of power could be generated and fed into the national and regional grid. GNPC also signed a contract in 1992 with Angola's state oil company, Sonangol Group, providing for drilling and, ultimately, production at two of Sonangol's offshore oilfields; GNPC was paid with a share of the crude oil. The Tema Oil Refinery underwent the first phase of a major rehabilitation in 1989. The second phase began in April 1990 at an estimated cost of CN¥220.8 million or GH₵77.8 million (US$36 million). Once rehabilitation was completed, distribution of liquefied petroleum gas was to be improved, and the quantity supplied was to rise from 28,000 to 34,000 barrels per day. Construction on the new Tema-Akosombo oil products pipeline, designed to improve the distribution system further, began in January 1992. The pipeline was to carry refined products from Tema to Akosombo Port, from where they would be transported across Lake Volta to the northern regions. Distribution continued to be uneven, however. Other measures to improve the situation included a CN¥171.7 million or GH₵60.5 million (US$28 million) project to set up a national network of storage depots in all regions. The Tema Lube Oil Company commissioned its new oil blending plant, designed to produce 25,000 tons of oil per year, in 1992. The plant was to satisfy the whole of Ghana's requirements for motor and gear lubricants and 60% of the country's need for industrial lubricants, or, in all, 90% of Ghana's demand for lubricant products. Shareholders included the state-owned Ghana National Petroleum Corporation and the 100% state-owned national insurance company, the Social Security and National Insurance Trust (SSNIT). Ghana's Jubilee Oilfield, which contains up to 3 billion barrels (480,000,000 m3) of sweet crude oil, was discovered in 2007, among the many other oilfields in Ghana. Oil and gas exploration in Ghana is ongoing, and the amount of both crude oil and natural gas continues to increase. The expected tremendous annual inflow of capital from crude oil and natural gas production into the Ghanaian economy began in the first quarter of 2011, when Ghana started producing crude oil and natural gas in commercial quantities.
At the end of 2012, declining productivity at one of the country's largest oil projects, the Jubilee oil field, led to a decline in revenues for the government, which had budgeted for oil revenue of more than $650 million; the corresponding shortfall was more than $410 million. The oil firm blamed the decline on "sand contamination of the flowlines that carry the oil from the underwater wells" into the storage facility on the surface. In the first and second financial quarters of 2013, Ghana produced 115,000-200,000 barrels of crude oil per day and 140-200 million cubic feet of natural gas per day. The 100% Iranian state-owned oil companies National Iranian Oil Company and Iranian Offshore Oil Company, together with Singapore Petroleum Company, Vetro Energy, and PetroSeraya of Singapore, have declared an interest in helping to build offshore platforms and drilling rigs for Ghana's state-owned oil company, Ghana National Petroleum Company, and in rapidly developing Ghana's oil and gas infrastructure and industry, as Ghana aims to further increase output to 2 million barrels of oil per day and 1.2 billion cubic feet of gas per day, with expected annual revenue of GH₵140.7 billion (US$65 billion) in 2014. Ghana is believed to have 5 billion barrels (790,000,000 m3) to 7 billion barrels (1.1 billion m3) of petroleum in reserves, the sixth largest in Africa and the 25th largest proven reserves in the world, and up to 6 trillion cubic feet of natural gas in reserves. Ghana's experience with the oil and gas industry has been more complex than is often assumed. Recent research shows that the challenges and prospects of the industry go beyond the often-discussed issues of macroeconomic stability to pressing current concerns about energy, and points to the possibility that the rents from oil and gas can be used for social change in Ghana.
In a new interdisciplinary project funded by the Alfred P. Sloan Foundation, Johanna Mathieu, Assistant Professor of EECS, and Catherine Hausman, Assistant Professor of Public Policy, will evaluate the past and potential future impact of batteries on the nation's energy system. This research has the potential to help guide future energy policies and investment in battery storage, especially in the area of renewable energy. "We're estimating how increasing the number of batteries on the system may change the mix of energy generation in the future," Prof. Mathieu said. "The goal is to determine how to change policy so that battery storage achieves both our environmental and economic goals." Battery storage can provide several services to the power grid. In particular, it is a promising method for integrating renewable energy into the grid. Since the energy generated by renewable sources – such as solar or wind power – is variable, it can be difficult to manage. Battery storage helps mitigate this variability, but batteries are still expensive, so they have not been widely deployed. As such, we have very little real-world evidence for how using batteries for grid services impacts grid operation and the environment. In addition, despite its possible benefits to the environment, the market may not offer enough incentive to increase the amount of battery storage on the grid, even in deregulated markets. "Solar and wind have both gotten a lot cheaper, but it really depends on the location and the time of day," Prof. Hausman said. "There are definitely parts of the country where renewable energy is cheapest, but this is a really dynamic industry and policy area. To craft appropriate policies, more analysis is really important." Prof. Hausman will analyze data from the largest electricity market in the Northeast, PJM Interconnection. PJM has seen significant growth in battery storage, and Prof. Hausman will use statistical tools and data-driven approaches to examine the effects this growth has had on the economics of the electrical grid and other generating units. She will specifically examine how past policies and technologies affected the energy infrastructure. "If you want to think about how your new energy invention or your new algorithm or your new idea is going to play out, you've got to think about the business side from the utility's perspective," Prof. Hausman said. "You've got to consider how they're going to respond in a regulated environment." While Prof. Hausman examines the economic effects, Prof. Mathieu will use the data from PJM to build and validate grid models and use those models to explore the impact of battery storage on the grid and the environment. Specifically, she will examine how increased battery storage affected PJM's ability to integrate renewable energy, and use this data to predict how future grids may operate. She will also work with Prof. Hausman to design new storage policies and explore how grid operation and environmental impacts would change under those policies. The research would be of particular interest to electricity grid operators, who are thinking about how to increase battery storage on their systems.
April 2nd, 2013. New technology for releasing gas trapped in shale rock has created a bonanza in several states. Are we entering a golden age that restores American manufacturing competitiveness and reduces greenhouse gas emissions, or are we creating environmental and health problems with a fuel that's neither cleaner nor cheaper than what it's replacing? Mark Zoback, professor at the Stanford University School of Earth Sciences, believes that hydraulic fracturing can serve as a bridge to a renewable energy future, but only if it's done responsibly. "Right now there are about 20K wells a year that are drilled horizontally and then hydraulically fractured. Each well has between about 5 and 15 hydraulic fractures on average, so there's roughly 200K hydraulic fractures carried out every year." He sees the market impact as "remarkable" at the local and state level, creating both jobs and tax revenues. On the national level, he said, "people are talking about a manufacturing renaissance in the Midwest." He went on to say that American consumers are paying one-third of what they were paying for natural gas before the large-scale production of shale gas, and that CO2 emissions from coal are down 20% in the last few years. "All of the other pollution and health problems associated with coal are also diminishing thanks to the increased use of natural gas. So there are many positive benefits. But there are also environmental impacts." One of the problems is that states are responsible for regulating fracking, and some are doing it better than others. Addressing the environmental impacts, Kassie Siegel, senior counsel at the Center for Biological Diversity's Climate Law Institute, agreed that the fracking boom has transformed the economy, but at an unacceptable price. "The fact is that fracking poisons our air and our water, and it brings terribly intense industrial development to previously peaceful communities." She went on to say, "The nature of shale development is that you have to drill many wells, and to keep up production you have to keep drilling more wells. It's conventional development on steroids." She also said that it is not the climate-friendly fuel it's promoted to be. "It's not a bridge to a clean energy future. It's actually a bridge to extreme climate disruption." She believes we should ban fracking. TJ Glauthier, former Deputy U.S. Secretary of Energy and former board member of Union Drilling, said that natural gas has an important role to play in a cleaner economy and a cleaner future, but it has to be well regulated. "We need to regulate each stage of what we're doing—the actual drilling operations, the fluids being used for fracking, the production process, and go right through each of these areas. I think it's possible to do it in a way that's responsible and safe and will help us move ahead to an appropriate future." Siegel said that the solutions have been around for years, but we haven't adopted them. "So I don't think you can use the fact that we could clean up oil and gas as an excuse from getting policy changes that we really need to drive us off of fossil fuels." She referred to a current draft rulemaking by the Bureau of Land Management, which adopts "almost none" of the recommendations of the Shale Gas Subcommittee, on which Zoback served. Siegel said, "If an oil and gas company claims that the chemicals used in fracking are a trade secret, they don't even have to give us the information.
We know how to clean it up and we’ve known how to clean it up for years, but it hasn’t happened.” According to Zoback, natural gas doesn’t make any sense if we’re not going to be decarbonizing the energy system further. “We have to have incentives in place, programs in place so that we can transition in an economically viable and socially acceptable way from fossil fuels to renewables. And we all should see that roadmap. And we should know where we are on that road.”
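Zoback's figures are easy to sanity-check with simple multiplication; the snippet below just re-derives his "roughly 200K" from the well count and the per-well stage range he quotes.

```python
# Sanity check on the quoted figures: ~20,000 horizontal wells a year,
# each hydraulically fractured in roughly 5 to 15 stages.
wells_per_year = 20_000
stages_low, stages_high = 5, 15

print(f"low estimate:  {wells_per_year * stages_low:,}")   # 100,000
print(f"high estimate: {wells_per_year * stages_high:,}")  # 300,000
midpoint = wells_per_year * (stages_low + stages_high) // 2
print(f"midpoint:      {midpoint:,}")  # 200,000, matching "roughly 200K"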
A manufacturing industry The New Zealand flax industry was saved in the 1930s by the growth of manufacturing. There was a deliberate switch from exporting fibre to processing it for local use. Flax mills in the Manawatū region began to supply the Foxton factory of New Zealand Woolpack and Textiles Ltd, which made flax fibre into woolpacks for farmers. More factories opened, and other goods were made, including underfelt, floor coverings, upholstery materials and binder twine. During the Second World War, the government promoted the growing of linen flax (a different species from New Zealand flax) in Marlborough, Canterbury, Otago and Southland. Linen cloth was urgently needed by Britain for aircraft construction and other uses. With the German invasion of the Netherlands and Belgium, the usual sources of supply were lost, so allies like New Zealand were asked to help. But when the war ended, this new fibre industry faded away. The manufacturing industry managed to survive for the next 50 years mainly because of government support. In 1936 the government restricted imports of woolpacks made from Indian jute. In 1939 it bought the Moutoa Swamp near Shannon as an experimental flax plantation. During the Second World War it helped the flax industry because it was supplying farmers and the military. After the war, government subsidies and import restrictions on fibres from overseas kept the industry going. The removal of government protection in the 1970s and competition from synthetic fibres hastened the end. The last flax manufacturing plant closed in 1985. In the 2000s the trend towards using natural products made from renewable resources sparked fresh interest in flax. For many years flax was used to make high-quality paper. It is now the basis of craft and florists’ products. The soothing gel from the base of New Zealand flax leaves is used in skincare products, such as those produced by the New Zealand company Living Nature. There is still scope to exploit the medicinal and nutritional properties of the plant. The oil from linen flax seed is known to help some health problems. But as yet, New Zealand flax seed oil has not been manufactured, although it contains linoleic acid – vital for human nutrition. Researching new uses In the 2000s scientists began to explore different uses for flax. The Biomaterials Engineering unit at Scion, Rotorua, is investigating ways to improve the strength and performance of flax fibre by combining it with other natural fibres such as hemp and wood, and synthetics such as glass fibre. Results are encouraging, and the material also looks attractive. Future uses include building materials, furniture and packaging. Another project, started by Industrial Research and textile conservator Rangi Te Kanawa, looked at ways of softening flax fibre so when woven it could be made fine enough for fashion clothing. In 2006 a company, Muka Ltd, was seeking a patent for a mechanical stripping device, in order to start production. Work by AgResearch (a Crown research institute) found that the leaf material left over from flax stripping makes a wholesome stock food. Another AgResearch project examined how flax planted along waterways can absorb nitrogen, reducing problems caused by liquid waste runoff. In 2003 the Sustainable Farming Fund began an overview project to bring together research on the environmental and commercial benefits of flax and promote wider use of this natural resource. A plant with a rich history, New Zealand flax clearly has a promising future.
The UK renewable energy sector hit a new record in the year's second quarter, generating 25.3% of the country's electricity and beating out coal for the first time. The UK's Department of Energy and Climate Change published figures this week (PDF) highlighting the energy mix for the second quarter of 2015, covering April to June. According to the DECC, renewables generated 25.3% of the UK's electricity in the second quarter, with 42% of that figure coming from onshore and offshore wind, meaning that wind supplied 10.7% of the UK's electricity needs. More significant was the place of renewables relative to other types of generation: gas accounted for 30.2% of all electricity generated in the second quarter, nuclear for 21.5%, and coal for only 20.5%, putting renewables in a healthy second position overall. Renewable energy's share of electricity generation rose from 16.7% in the second quarter of 2014 to 25.3% in Q2 2015, totalling 19.9 TWh, an increase of 51.4% over a year earlier. "Renewables have now become Britain's second largest source of electricity, generating more than a quarter of our needs," crowed RenewableUK's Chief Executive Maria McCaffery. "The new statistics show that Britain is relying increasingly on dependable renewable sources to keep the country powered up, with onshore and offshore wind playing the leading roles in our clean energy mix. "As the transition to clean electricity continues apace, we'd welcome clearer signals from Government that it's backing the installation of vital new projects. So far, we've had a series of disappointing announcements from Ministers since May which unfortunately betray a lack of positive ambition at the heart of Government. If Ministers want to see good statistics like we've had today continuing into the years ahead, they have to knuckle down, listen to the high level of public support we enjoy, and start making positive announcements on wind, wave and tidal energy."
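The DECC percentages quoted above are internally consistent, which is easy to verify; the snippet below recomputes wind's share of total generation and the implied Q2 2014 renewable output from the year-on-year growth figure. Only numbers quoted in the article are used.

```python
# Consistency check on the Q2 2015 DECC figures quoted above.
renewable_share = 0.253   # renewables' share of all generation
wind_fraction = 0.42      # wind's share of the renewable total
renewable_twh = 19.9      # renewable output, Q2 2015, in TWh
yoy_growth = 0.514        # growth over Q2 2014

print(f"wind share of all generation: {renewable_share * wind_fraction:.1%}")
# ~10.6%, matching the quoted 10.7% once rounding is accounted for

print(f"implied Q2 2014 renewable output: {renewable_twh / (1 + yoy_growth):.1f} TWh")
```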
Bioplastics will grow at a significant pace over the next five years. The total worldwide use of bioplastics is valued at 571,712 metric tons in 2010. This usage is expected to grow at a 41.4% compound annual growth rate (CAGR) from 2010 through 2015, to reach 3,230,660 metric tons in 2015. - By 2010, ready access to crops such as soybeans, corn, and sugarcane had moved the United States strongly into bioplastics. North American usage is estimated at 258,180 metric tons in 2010 and is expected to increase at a 41.4% CAGR to reach 1,459,040 metric tons in 2015. - Use of bioplastics got off to a faster start in Europe than in the United States. European usage is reported at 175,320 metric tons in 2010 and is expected to increase at a 33.9% CAGR to reach 753,760 metric tons in 2015. Market forces, especially the increasing focus on environmental threats such as global warming and the disposal of products containing toxic materials, have strongly driven the development and early use of bioplastics. Bioplastics are plastics that are made from renewable resources, such as food crops or biomass. The terms bioplastics and biodegradable plastics have been used interchangeably, but there is a difference between the two types of polymers. This report defines a fully biodegradable polymer as a polymer that is completely converted by microorganisms to carbon dioxide, water, and humus. In the case of anaerobic biodegradation, carbon dioxide, methane, and humus are the degradation products. Some, but not all, bioplastics are also biodegradable. Study goals and objectives. Goals and objectives of this study include: - Identifying trends that will affect the use of bioplastics and their major end-use application markets - Reviewing, analyzing, and forecasting specific end markets for bioplastics by material type, with sections devoted to each type of renewable-sourced plastic - Analyzing and forecasting market developments from the viewpoint of the major applications for bioplastics, that is, packaging, automotive, electrical/electronic, medical, building and construction, and others - Profiling many of the most important suppliers of bioplastics, including resin producers and compounders. Reasons for doing the study. The rapid emergence of bioplastics is one of the major materials stories of the period starting in 2010. Once billed as biodegradable plastics, the theme for renewably sourced plastics has shifted dramatically in recent years to sustainability. In order to maximize market impact, there is now a growing trend to compound bio-based plastics with oil-based plastics to extend their reach into markets for durable products used in cars, cell phones, and elsewhere. The focus has shifted to total carbon footprint and away from contribution to the solid waste stream. Due to growing concern about climate change and the negative health impacts of many existing materials, this report will be of interest to anyone who sells, designs, or manufactures products that are, or could be, made from polymeric materials. This report will also be of value to individuals who are helping to establish public policy about issues ranging from limits on the use of plastics packaging to potential limits on the use of vinyl compounds in medical applications.
This report will be of value to technical and business personnel in the following areas, among others: - Personnel in end-user companies in a wide range of industries, from retail bags to solar cell manufacturing - Marketing and management personnel in companies that produce, market, and sell any type of plastics - Companies involved in the design and construction of process plants that manufacture resins and products made from the resins - Companies that supply, or want to supply, equipment and services to plastics companies - Financial institutions that supply money for such facilities and systems, including banks, merchant bankers, venture capitalists, and others - Investors in both equity and fixed-income markets; the fate of plastics weighs heavily on the values of the publicly traded stocks of companies such as Eastman, Bayer, DSM, and DuPont - Personnel in government at many levels, ranging from federal to state and local authorities, many of whom are involved in trying to ensure public health and safety; the report will also be of interest to military scientists studying new packaging and equipment. Scope of report. The focus of this report is plastics that are made from renewable resources such as biomass or food crops. There is even some potential for developing bioplastics from animal resources. Plastics that may potentially be made from waste carbon dioxide are reviewed because of their potential impact on bioplastics, but their data is not included in the forecasts presented here. Bioplastics are further defined here as polymer materials that are produced by synthesizing, either chemically or biologically, materials which contain renewable organic matter. Natural organic materials that are not chemically modified, such as wood composites, are excluded. The report includes the use of renewable resources to create monomers that replace petroleum-based monomers, such as polyester and polyethylene made from sugarcane feedstocks. Ethanol, a major product in Brazil, is one small chemical step from ethylene. The focal point is on the following resin chemistries: - Polylactic acid - Thermoplastic starch - Bio-polyamides (nylons) - Polyhydroxyalkanoates (PHA) - Bio-polytrimethylene terephthalate (PTT) - Bio-bottle-grade polyethylene terephthalate (PET) Biodegradable and photodegradable polymers made from petrochemical feedstocks are not included. Other renewable resin chemistries, including collagen and chitosan, are also covered, but in less detail because their roles are not as well developed. Click for report details: www.companiesandmarkets.com. Companiesandmarkets.com is a leading online business information aggregator with over 500,000 market reports and company profiles available to our clients. Our extensive range of reports is sourced from the leading publishers of business information and provides clients with the widest range of information available. In terms of company profiles, Companiesandmarkets.com's online database gives clients access to market and corporate information on over 100,000 different companies. We provide clients with a fully indexed database where they can find specific market reports on the niche industry sectors of interest to them. Companiesandmarkets.com is focused on providing clients with exemplary customer service and a flexible approach to accessing business information.
Our team has extensive expertise in the market research industry and is keen to advise clients on their research requirements and possible alternative sources of information, a model that provides clients with a value-for-money solution to research.
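As a plausibility check, the standard compound-growth formula reproduces the 2010-to-2015 forecasts quoted at the top of this report summary to within rounding. The sketch below uses only the baselines and CAGRs stated above.

```python
# Recompute the report's 2015 forecasts from the 2010 baselines and
# the quoted CAGRs, using the standard compound-growth formula.
def project(base_tons: float, cagr: float, years: int = 5) -> float:
    return base_tons * (1 + cagr) ** years

print(f"worldwide:     {project(571_712, 0.414):,.0f} t")  # report: 3,230,660
print(f"North America: {project(258_180, 0.414):,.0f} t")  # report: 1,459,040
print(f"Europe:        {project(175_320, 0.339):,.0f} t")  # report: 753,760
```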
Forty-five years ago it was my first job: a skinny 17-year-old kid working on a 50-km stretch of dirt, soon to be Highway 103 East from Hubbards to Halifax. A simple job, too: place a small stone on a stake at the side of the alignment to indicate a fill, a small stone and a leaf to indicate a cut. All day I'm doing that, with the foreman periodically measuring the cross slope of each graded section with a crown board, slashing an "X" in the dirt where a deeper cut was required or an "O" for more fill. Then we'd start all over again, cutting and filling until the dirt surface was as finished as we and the grader operator could make it. Prehistoric by today's standards? You bet. In fact, these days grade checkers, surveyors and equipment operators are finding their jobs a whole lot easier not because of what's there in the dirt at their feet but because of what's more than 20,000 km overhead: 24 satellites, each positioned so that no fewer than four transmit signals at any given moment to help us fix the length, width and elevation of our positions on the ground. This simple principle has been revolutionizing the way many contractors build roads and highways since the late 1990s. Others are less convinced—in part because of acknowledged costs, but also because of misunderstanding about what GPS can and cannot do. More than a game of inches: All projects begin with a job plan, of course, but increasingly contractors are relying on 3D digital job plans loaded into a machine's GPS control box and remotely connected to a nearby base station, where satellite signals are closely calibrated to indicate the position of the machine on the ground and the grading tolerances it must work to. Instead of relying on stakes positioned every 25, 50 or 100 feet, 3D machine control "checks grade everywhere at every position and every location along the job site. So you're not only accurate at every grade stake but every inch between all those grade stakes, too," says Chris Mazur, North American product marketing manager for Leica's machine control division. Working in tandem with the machine's hydraulics and the angle sensors mounted on the bucket or blade, GPS increases productivity and accuracy, and reduces the number of passes the machine has to make to come to grade. "If I can get the grade in three passes instead of six or seven I'm going to burn a lot less fuel and save money, and I'm going to put a lot less wear and tear on my machines," says Topcon Positioning Systems product manager Tony Vanneman. It also means less wear and tear on machine operators, and a significant reduction in the amount of engineering and staking required on job sites. "You still have benchmarks on any project, but you can literally get by with a 90 per cent reduction in staking because now all that information is in the operator's machine or in the field controller of the grade setter." Trimble Navigation's sales engineering manager Lamar Hester says GPS also helps dozer and grader operators work faster, particularly when blue topping. "They can run at a constant speed, not having to slow down, cutting it to fine grade…so they do pick up their speed when they're operating with machine control." Every bit as important as the hardware, software development gauges and improves work flow from the standpoint of those administering the job back at the office.
Trimble's "connected site strategy" streamlines work flows by connecting the machine in the field via the internet to the office, where staff track the progress of the job and make fresh decisions about how to proceed. "Then if they have design updates they can flow that information from the office straight out to the machine as well," says Hester. Similarly, Topcon's SiteLink is a site management system that automatically updates work flow, up time, volumes moved and so on, which can be shared with supervisors, foremen and operators along with site plan revisions that can be immediately implemented. Fleet diagnostics and management, fuel consumption, idle time, productivity and maintenance are determined down to the level of each machine, regardless of the manufacturer. A case study: Proponents call it the most significant single-highway investment made in Ontario history. By the time the Rt. Hon. Herb Gray Parkway is finished in 2014, trucks travelling Highway 401 from as far away as Montreal will no longer have to wind their way through the city streets and neighbourhoods of Windsor before crossing over to the other side of the Detroit River. A key player in the $1.5-billion parkway project is Ontario-based builder Amico Infrastructures Inc., which will construct approaches to both the brand-new international bridge and the inspection plaza. The main challenges, says Amico survey manager Scott Rahm, are cuts of up to 12 metres deep and approximately 3 million cubic metres of earth moved, some of it on existing streets and roadways in heavily residential neighbourhoods. "So we do have a fair number of shovels, dozers and graders. Of those, 15 dozers and three graders are equipped with GPS. Without GPS, I don't know how we could even approach doing this job." A case in point: staking. Rahm points out that stakes are still helpful on smaller job sites, but not here, where Amico runs about 200 to 250 trucks per day. "If you had stakes they'd be run over five minutes after you put them in." The close proximity of existing roadways also makes for cramped working conditions for equipment operators. But with all the design information on machine location and blade position on their digital displays, "operators don't need a grade man out there 24/7 telling them every time they make a cut if they're too deep or not too deep." Another challenge, Rahm says, is elevations. Historically, batter boards and the keen eye of a project foreman were the principal means of ensuring proper slopes and curves. That's less reliable on a job site that features two dozen bridges and tunnels, and where equipment operators work from normal ground levels down to 12 metres deep. To cut it once and cut it right, says Rahm, Amico combines GPS technology with Leica's TS-15 Total Station robotics systems "so that our tolerances are bang on." "For the road cuts they're running at 30 mils both on the sub grade and stone grade. But when we go to do the fine grading with the GPS and the total station it's not a problem to get to 10 or even five mils." Brad Hoey is not convinced. When you grade and pave for airport runways you work to very fine tolerances, says the project manager for Island Asphalt Company in Victoria, and GPS will give you that only if you supplement your dozer or grader with conventional laser beam technology. "GPS can tell us elevation differences but it can't correct within the proper tolerances that we need for paving.
It's not fine enough." Hoey's reluctance seems to be shared by much of the industry: only about 30 to 35 per cent of contractors have invested in GPS technology. Tony Vanneman admits GPS manufacturers have a way to go before the contracting industry fully embraces GPS or even laser-guided technology. What those who don't use it fail to appreciate, he says, is how far these technologies have come. Today's laser technologies, for example, are a far cry from the conventional horizontal lasers used 30 years ago, and he cites Topcon's own Laser Zone as an example: a 10-metre-high "wall of light" that can cover a job site up to 1,000 feet from the transmitter. "Anywhere within that laser zone an innumerable number of machines, machine and man rovers can utilize this signal. It augments or enhances that conventional GPS-only signal and that's how we're able to get down to significantly tighter tolerances, whether you're grading or asphalt or concrete paving."

Peter Robson and Len Friesen are certainly sold on the technologies. Robson is director of intelligent machine controls for Komatsu; Friesen is a commercial landscaper in Winkler, Man. Friesen had used GPS in landscaping before, but decided to make an even bigger investment last year after he expanded into commercial grading. The result: a JD 200 excavator and JD 333 skid steer equipped with a Topcon GPS system (the first sold in Western Canada) and two JD 650J and 700J dozers with Topcon's 3D-MC2. This system pairs Topcon's GX-60 control box, GPS+ antenna, MC-R3 receiver and sensors with advanced control software to provide position updates up to 100 times per second.

A costly venture at $150,000 per machine? Sure, says Friesen, "but there's a cost factor in not being able to do your job as well. If you're waiting for someone to give you grades and you underfill or overfill you're going back to do the same job twice." A bonus, adds Robson, is a relatively quick return on investment in GPS technologies of "anywhere from 12 to 18 months." Trimble's Hester calls that "a good estimate," but says paving contractors can do even better once their bonuses kick in: "If we're helping you to achieve the smoothest surface out there so that you can guarantee that you're going to get your smoothness bonus, you could see a much faster rate of return." All this, says Robson, could push more contractors towards GPS. "I don't see why in five to six years' time it wouldn't be getting close to everybody using it."

Greater acceptance is more likely to occur as GPS improves, in particular, says Scott Rahm, in the number of satellite systems available to ensure greater global coverage of satellite signals and greater accuracy. That could mean supplementing the American NAVSTAR and Russian GLONASS systems with the European (Galileo) and Chinese (BeiDou) systems. "It tightens the accuracy more and more the longer we go, which is all we can hope for." Rahm looks forward to a day when GPS will enable contractors to do a job in a single pass, "so that you go from the initial cut right down to the millimetre." "We're getting better at that every day. We just hope that continues and we can get better accuracies the further we go."

David Godkin is a B.C.-based freelance writer and regular contributor to On-Site. Send comments to [email protected].
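The cut/fill and tolerance arithmetic that runs through this article can be pictured in miniature. The following is a minimal sketch, assuming a simplified design surface (a constant profile grade plus a constant cross slope); production machine-control systems work from full 3D design models, and every number here is illustrative, including the 30 mm tolerance borrowed from Rahm's road-cut figure.

```python
def design_elevation(station, offset, start_elev, grade, cross_slope):
    """Design elevation (m) at `station` metres along the alignment and
    `offset` metres from centreline: a constant profile grade plus a
    cross slope falling away from the crown. Illustrative model only."""
    return start_elev + grade * station - cross_slope * abs(offset)

def grade_check(design, measured, tolerance=0.030):
    """Compare a measured blade elevation to the design elevation.
    Positive delta means fill is needed, negative means cut; anything
    within tolerance (30 mm by default) counts as on grade."""
    delta = design - measured
    if abs(delta) <= tolerance:
        return "on grade", delta
    return ("fill" if delta > 0 else "cut"), delta

# A blade position reported at station 250 m, 3 m left of centreline.
target = design_elevation(250.0, -3.0, start_elev=100.0,
                          grade=0.02, cross_slope=0.025)
print(grade_check(target, measured=104.83))
# ('fill', ~0.095): the electronic version of the old "O" slashed in the dirt
```

A 3D machine-control box evaluates essentially this comparison continuously, at every blade position, rather than only at staked stations.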
If you think that the use of plastic items spells an environmental concern, you may not be entirely right. Technology has made it possible to manufacture environment-friendly plastic polymers, unlike their non-biodegradable predecessors. Contrary to popular belief, the plastic industry can also contribute to a safer, greener environment with a few considerations in place. If you're wondering what opportunities the plastic manufacturing sector might hold for you, read on to find out more about the largely untapped potential of biodegradable plastic manufacturing in India and how you can benefit from it.

Biodegradable plastic glass manufacturing - an unexplored opportunity

The fast-growing economy of India has seen a surge in private-sector employment opportunities. Numerous offices have mushroomed in both the metropolitan and suburban areas of the country. Add to that the coffee shops that have opened up in every nook and cranny, and all of it points towards an increase in the demand for disposable plates and glasses. Biodegradable plastic plates and glasses have many advantages. They are:
- Easy to use
- Easy to dispose of

But the plastic plates and glasses available in the market often do not conform to environmental safety standards. This is where you, as a new-age digital entrepreneur, can step in and bring a change. As a plastic glass manufacturer, you can start making eco-friendly, biodegradable plastic plates and glasses. While the initial investment may be a little on the higher side, if you plan the venture carefully, you'll reap benefits in the long run.

Contradictory as it may sound, there are three types of eco-friendly plastics. As a plastic glass manufacturer, you can opt for manufacturing biodegradable plastic glasses. Biodegradable plastic glasses are made of only one type of base material, are uncoloured, and are more hygienic than their non-biodegradable variants. Another option is recycling plastic glasses already in circulation. You may think collecting and recycling plastic glasses is a tedious task, but the arrival of glass-collecting bins has made it easy.

Now that you know that only a few considerations can help you become an eco-friendly plastic glass manufacturer, you can build your brand value around the biodegradability of your plastic items. Since multiple government policies are being put in place banning plastic products that don't meet certain benchmarks, the scope for eco-friendly plastic products is immense. Seize the chance and become an eco-friendly plastic glass manufacturer today. You can cater to a large base of customers by selling your products online. Register as a seller on Amazon and help your business grow with the help of Amazon's diverse seller services.
She further states that Romeo will beautify the night sky, his image spread in little stars across its expanse. The word 'fate' originates from the Latin 'fatum', which can be translated as "something that has been spoken" (Merriam-Webster Encyclopaedia). Romeo and Juliet is a Shakespearean play. Human nature killed Romeo and Juliet. Fate also caused Juliet's cousin and Romeo's friend to die on the same day. Gold continues to represent wealth and jealousy, the vices that keep Romeo and Juliet apart. Helping Juliet to flee might have harmed her family, and the consequences would not have stayed between the protagonists alone. The party scene ends with the guests leaving.

Juliet is just shy of 14, and Romeo is about 16 or 17; as the Nurse puts it, "And yet, to my teen be it spoken, I have but four--". The fighting scene between Mercutio and Tybalt, and later between Tybalt and Romeo, contains many references to fate. It is also the Friar who comes up with a plan to save Juliet, but he fails to put it correctly into practice. The play was among Shakespeare's most popular during his lifetime and, along with Hamlet, is one of his most frequently performed. Today, the title characters are regarded as archetypal young lovers. The Merriam-Webster definition of 'fate' adds a level of causation to the meaning of the word (Merriam-Webster Encyclopaedia). Benvolio and Mercutio look for Romeo. To write a tragedy that did not culminate in death would not fit the genre. Throughout his plays Shakespeare treated members of almost every religion with generosity (cf. Wells, 2001, p. 419).

Juliet sees Romeo dead beside her, and surmises from the empty vial that he has drunk poison. He, in turn, had mistakenly thought that Juliet was dead (in fact she was only drugged and was waiting for Romeo to come and rescue her). Several hours pass during this scene even though it takes only a few minutes to perform. Throughout the play and the characters' utterances one cannot be sure whether fate is really judged as something which cannot be changed and is already decided upon, or whether fate is, as Juliet calls it, 'fickle', and any other circumstance might change the outcome. No one can say if they went to Heaven or not. After Romeo hears that Juliet has died (she has faked her death by taking a sleeping potion), he gets poison from the apothecary and goes to Juliet's tomb.
After reading what you wrote, you asked how she can compare this to Romeo and Juliet, and this is how she does: she ran away with a guy she thinks she loves, which is like the play; what makes it unlike Romeo and Juliet is that, as she says, her love for him isn't enough for her to die for him. So in one way it is like the play, and in another way it isn't.

The topic of Religion has long been denied any importance for Shakespeare's plays. Back in Shakespeare's time, it was more interesting to the audience for a play to end in death. Mercutio, Tybalt and Paris … Maybe the death of Mercutio can be called bad luck, but Romeo killed Tybalt on purpose, bound by obligation to his friend and enraged about his death. Romeo consumes his poison and dies; however, Juliet soon stirs awake and discovers Romeo dead at her side. A newspaper reporting such news would likely call it horror, not tragedy. It's too bad that the word "tragedy" has lost its meaning, as there is no other word that can be used the same way. Romeo and Juliet meet at the party and fall in love. The tragic suicides of the star-crossed lovers Romeo and Juliet are the most famous deaths in the play. Now Romeo and Juliet may be together in a better place, where their names don't matter. When he hears about Juliet's death, all his hopes are shattered and he starts to hate the stars and their way of meddling with his life (5.1.24). Shakespeare was baptized on the 26th of April 1564, probably only some days after being born, in Holy Trinity Church in Stratford-upon-Avon (cf. Wells, 2001, p. 419).

Paris was going in to give Juliet flowers, and Romeo was saying things like "I want to lay with Juliet." Afterwards, all three previously mentioned reasons for the couple's death will be illustrated and analyzed. Juliet dies about two weeks prior to her 14th birthday. With the prologue Shakespeare foretells the ending of his play. Romeo could have mourned for Juliet but stayed in Mantua. The play starts with a prologue in which a chorus informs the audience about the fate of the young lovers. Therefore, Romeo and Juliet must die, since Shakespeare was writing a tragedy. She dies upon Romeo's body. This scene is omitted from film versions. All we know is that they both killed themselves. Perhaps Romeo and Juliet were fated to love, and die, for the greater good of Verona. Juliet does threaten to kill herself if the Friar can't come up with a plausible solution. It begins with the servants setting up and the guests arriving.
That means that Juliet definitely would not have known Romeo without fate helping (Smith, 2012, p. 159). Instead, Romeo only heard that Juliet had died, so he intends to die at her side in a symbolic act of eternal love. On the one hand he knows that he will have to avenge his friend's death when he gets the chance, but on the other hand he can also be sure that doing so can only lead to more sorrow. An actual tragedy is when someone is destroyed by their own nature. Thus his weakness caused him to choose to leave, with no help from fate, and so brought about the death of Juliet. He could have started all over, and Juliet could have done the same after finding out that Romeo had killed himself. The play Romeo and Juliet brings out a theme of fate which turns out to be only surface deep. "Romeo and Juliet" is one of the most famous plays to have survived the decades.

With these words Romeo utters a presentiment of danger which goes hand in hand with the prologue's announcements. He gets extremely sad and disheartened. The media almost always say it's "tragic" when a bus crashes and a bunch of children die. Shakespeare also uses the recurring motif of gold and silver to criticize the childishness of the feuding adults. Publisher Nicholas Rowe was the first critic to ponder the … Throughout the play the church, personified by Friar Laurence, plays an important role. Only in recent years have scholars come to the conclusion that Religion was a powerful cultural structure of the sixteenth century and probably did affect the author to some extent. The time has now come for Romeo to assert himself and take his place as the hero. The earliest known critic of the play was diarist Samuel Pepys, who wrote in 1662: "it is a play of itself the worst that I ever heard in my life." The play also contains many chances which can easily be appointed as agents of fate. The Friar, as Romeo and Juliet's trustee, and their shared religion are a common ground for both protagonists. This happens all the time.

Shakespeare wrote during a time when plays were either comedies or tragedies. Tybalt is angry that Romeo came. Both of them were in control of their lives when they decided to kill themselves. Next to that, there are many incidents and circumstances in the play which could not have been influenced by either Romeo or Juliet, which would also coincide with a definition of fate; but in the end both protagonists do have the chance to change their destiny. In addition, pride, a natural human flaw, caused the grudge, made Romeo incapable of walking away from the fight, and made Friar Lawrence feel the need to come up with an elaborate plan for the two teenagers' eloping. Romeo took poison when he believed Juliet to be dead (though he would have seen her alive if he had waited a little longer); Juliet stabbed herself after finding Romeo dead. Romeo killed himself by drinking poison. The poor man should never have sold Romeo the poison to kill himself in the first place. Their deaths were caused by many factors, such as their destined fate to die, unlucky happenings and misfortune, and their sudden adolescent passion. Examination question: Why has Shakespeare ended Mercutio's dramatic life so early in the play? For instance, Romeo and … Many wonder why Romeo and Juliet must end with the title characters' deaths. Finally, Romeo and Juliet died because their deaths were determined by fate and the prediction from the stars. Romeo was in the wrong place at the wrong time when Mercutio was killed and Tybalt threatened his life.
It is debatable why Friar Laurence advises Juliet to fake her death in the first place and does not take her into a nunnery right away. Juliet also shows her inner strength and independent nature in her decision to die rather than marry Paris: "If all else fail, myself have power to die." In Romeo and/or Juliet, the "star cross'd lovers" don't necessarily have to die; instead, they might put on robot suits and conquer Verona. Poet John Dryden wrote 10 years later in praise of the play and its comic character Mercutio: "Shakespear show'd the best of his skill in his Mercutio, and he said himself, that he was forc'd to kill him in the third Act, to prevent being killed by him." That's not what the word means. But given the society in which Shakespeare and his characters lived, this would have been a sin too. And it is a must-read.

The storyline was not new to the world and originates in a poem by Arthur Brooke, The Tragicall Historye of Romeus and Juliet (cf. Schmiele, 1963, p. 108). The prologue also suggests that the couple cannot do anything to change their destiny and that they are facing an unlucky future which later culminates in their suicide. Many times he reduces an intervention of God or other spirits to their presence. Comedy ends in marriage and tragedy in death. However, this did not mean that the Nurse would have had any more say in Juliet's upbringing than Capulet and Lady Capulet. About the newspaper report: newspapers, schmewspapers. Were Romeo and Juliet victims of circumstance? Since Romeo thinks Juliet is dead he wants to die next to his wife, so he buys a poison from a poor man. "With love's light wings did I o'erperch these walls, For stony limits cannot hold love out." – Romeo. Instead, he kills himself because of adolescent passion and Juliet dies for him in return.

Other readers may examine the play through the lens of happenstance and coincidence, and thus conclude that Romeo and Juliet's fates were not wholly predetermined but rather a series of unfortunate and unlucky events. All in all, 'fate' is regarded as a phenomenon of occurring circumstances which lead to either a happy or a bad ending for something or someone. Thank you, this was very helpful in understanding Shakespeare and a big help with my assignment. Juliet hopes that their fortune may be changeable and will bring Romeo back to her, saying: "O Fortune, Fortune! All men call thee fickle; If thou art fickle, what dost thou with him That is renown'd for faith?" Even before Romeo gets to know Juliet he says: "I fear, too early; for my mind misgives / Some consequence yet hanging in the stars / Shall bitterly begin his fearful date / With this night's revels, and expire the term / Of a despised life closed in my breast / By some vile forfeit of untimely death" (1.4).
He indicates that destiny will start to take its course that night and that it will bring along bad consequences for him. He also thinks that love that is pursued violently will also end violently and will "[…] in [its] triumph end like fire and powder […]" (2.5.10). Towards the end of the play Juliet does get the chance to leave her family forever, but she does not take it. Still, the sentence contains a contradiction, since Romeo himself killed Tybalt, with his own hands. (See http://oxforddictionaries.com/definition/english/fate and http://www.merriam-webster.com/dictionary/fate.) The message did not reach Romeo by some means, and later Romeo got to know about Juliet's fake death from his servant Balthasar. Although she secretly marries Romeo, she does not want to leave her family dishonorably. The Capulets only knew of Romeo; they had never met him. Although the Friar acts very modern when he allows Romeo and Juliet to decide whom they want to marry on their own, he does not seem ready to undermine the conditions of their patriarchal society one bit further, which also explains why he left Juliet in the tomb.

Where did Romeo and Juliet die? The first version of "The most excellent and lamentable tragedy of Romeo and Juliet" was released between 1594 and 1597, when Shakespeare was about 30 years old (cf. Davies, 2001, p. 397; Smith, 2012, p. 157). Near the end, I cried my eyes out! By writing the play, Shakespeare began the shaping of modern drama, in which the fates of ordinary people are as crucial as those of the great. Here the text suggests that Romeo blames the stars, and therefore fate, for killing Juliet. Fate might be a well-chosen scapegoat to justify Romeo and Juliet's deaths, but overall the text suggests that it is not the reason why the play had to end badly. Shakespeare wrote Romeo and Juliet during a time when most of his other plays were of comedic character. Both of them will spend their marriage in the grave. Everything outside of Verona seems hell to him because Juliet will not be there.

I'm doing a project for school on Romeo and Juliet. "Romeo and Juliet" is a tragedy about two 'star-crossed lovers' ending their lives for each other's passion and love. The death of either seems like the end of the world to both characters. In Romeo and Juliet, Romeo held Juliet's mortal life in his hands, which he did not care for as he was supposed to do. He asks the Friar for a knife or some poison so that he can kill himself (3.3.44). Henry VIII's break with the Catholic Church (1534) and Queen Elizabeth's conformity with the Protestant belief indicated a formal religious union in society which was not really practiced by the common people. "Give me my Romeo; and, when I shall die, Take him and cut him out in little stars, And he will make the face of heaven so fine That all the world will be in love with night." You shouldn't blame Romeo or Juliet for their deaths, just as it's obvious that you shouldn't blame their families.
Anyway, the point is that the word "tragedy" has a very specific meaning. Parents have a duty to make certain decisions for their children, as they did in the 16th century; at that time, however, a girl or woman would hardly ever make her own decisions: these would be made by her parents. The appeal of the young hero and heroine is such that they have become, in the popular imagination, the representatives of star-crossed lovers. He takes the poison because he has killed the man Juliet should have married but did not want to (Paris). "Wisely and slow; they stumble that run fast" (2.2.94). In the 1996 movie Romeo + Juliet, why does Juliet just sit there and watch Romeo die? Tragedies were popular at the time (cf. Dunton-Downer & Riding, 2004, p. 305). The 'Oxford English Dictionary' defines 'fate' as a "[…] development of events outside a person's control [which is] regarded as predetermined by a supernatural power" (Oxford Dictionaries). Romeo then drank his potion, presuming her dead, and died. It is evident that the untimely deaths of Romeo and Juliet were the result of the feud between the Capulets and the Montagues. She's awake and rational enough to figure out that he's taken poison. All these quotes show how the theme of death was portrayed by the characters Mercutio, Romeo, and Juliet in the play Romeo and Juliet.

Juliet Capulet (Italian: Giulietta Capuleti) is the female protagonist in William Shakespeare's romantic tragedy Romeo and Juliet. A 13-year-old girl, Juliet is the only daughter of the patriarch of the House of Capulet. She falls in love with the male protagonist Romeo, a member of the House of Montague, with which the Capulets have a blood feud. There remains some debate as to whether it is advisable to teach the play to middle-school-aged children. This definition of fate can also be employed for Romeo and Juliet (cf. Smith, 2012, p. 158; McAllindon, 1991, p. 60). Romeo and Juliet are found dead, lying on the floor; the only things next to them are a puddle of water, broken glass, and a shelf. He wants her to go into a nunnery and endure her life as a widow. At the beginning of Act III, scene v, Romeo and Juliet are together in Juliet's bed just before dawn, having spent the night with each other and feeling reluctant to separate. Also, where did they die? In a cave, or what?

Bringing together Romeo and Juliet and Shakespeare's wish to write a tragedy, one might suggest that that wish alone could be a reason for the play's ending. The seemingly predetermined tragic ending of his play is considerably confronted with an unforeseeable comedic opposite. Romeo immediately bought poison when he heard of Juliet's supposed death; he should have asked Balthasar to investigate further and consulted the Friar. Romeo and Juliet die in Shakespeare's play because Shakespeare loved to write tragedies. He buys a strong poison for himself, comes to Verona in secret, and visits the family's crypt, where he sees Paris grieving over Juliet's dead body. If he had been writing a comedy, they would have married and their families would likely have reconciled. It is true that Romeo and Juliet are quite young, but they would have been considered of marriageable age in Shakespeare's time. These words of Juliet, "take him and cut him out in little stars," show her intense love for Romeo.
Another reason for the death of Romeo and Juliet is based on the expectations of Elizabethan drama. Human nature killed Romeo and Juliet: they didn't choose to die; it just happened because they loved each other. The paper is going to explore three different reasons: the (seemingly) inevitable fate of the "star-crossed lovers" (Prologue, 6), the danger of immature love, and the feud with its consequences for society, family, and coming of age. Ultimately the paper will try to find out what Shakespeare might have wanted to tell his audience and how his messages are conveyed by recent film adaptations. Juliet refuses, much to her father's disdain (Davis, 1996, p. 57). Romeo and Juliet is a tragedy written by William Shakespeare early in his career about two young star-crossed lovers whose deaths ultimately reconcile their feuding families. Romeo is thrashing around for about five minutes after Juliet wakes up. In order to answer this question it would be crucial to know whether Shakespeare wrote the prologue prior to coming up with an ending for his play or whether he added the prologue after finishing it. The couple can only see things through their own perspective, and had neither wisdom nor forbearance. In his day Shakespeare was influenced on the one hand by classic tragedies, but also by the writings of his contemporaries.

Lady Montague died from heartbreak upon hearing about her son's banishment. Instead, the idea of caution is arguably more applicable to Romeo and Juliet's families, who have allowed their feud to get out of control. Arthur Brooke is said to have "[…] used a moralistic French adaption, by Pierre Boaistuau, of a story by the Italian Matteo Bandello in his Le novella di Bandello, and Shakespeare probably also read William Painter's translation of Boaistuau in his Place of Pleasure, of 1567" (Wells, 1996, p. 1). Juliet is told that she is to marry Paris on Thursday. Count Paris was stabbed by Romeo in a duel. When Friar Laurence hears that Friar John was not able to deliver the letter to Romeo, he tries to set his plan right and goes to Juliet's monument. She has faked her death to avoid a prearranged marriage with Paris. Romeo and Juliet had to die because the play is a tragedy, and all tragedies end in death. It's much like the word "literally", which has also drowned in the murky cesspool of modern LOLnguage. Relying on concrete matters of fact like Shakespeare's baptism, his marriage and his last will, scholars assume that he was a lifelong member of the Anglican Church. Getting married is an act of pure love, but running away would be an act of pure badness (3.5). Come Lammas Eve at night shall she be fourteen. Juliet took a fake potion to make her sleep but appear to be dead.
How did the Nurse's daughter die in Romeo and Juliet? It's like the ending was already written for them. Mercutio's death is often viewed as accidental, since Tybalt may have been trying to kill Romeo. In general, such drama was split into two categories: comedy and tragedy. In scene 5, Romeo bids Juliet an emotional farewell after spending the night together. Lady Capulet believes that Tybalt's death is the cause of her daughter's misery and threatens to have Romeo killed with poison. These elements are the reasons for the tragic moments of this Shakespearean play (Cummings, 2012, pp. 663-664). The parents of Juliet help to aid the events that lead to the death of their own daughter. Now that we know all the people who could be blamed, it is time to talk about the people with the most blame. Last night, I watched the entire movie on Netflix. "Prick love for pricking and you beat love down." – Romeo and Juliet.

It is often the first Shakespeare play children read, but with many suicide pacts in modern times, some consider teaching the play to impressionable teens to be courting disaster. "…I have more care to stay than will to go," says Romeo. We might conclude that we're meant to infer that they just had sex, and that may be the way the scene is most commonly understood. Scholars are not entirely sure what Shakespeare believed in and whether he felt at home in the Catholic Church or the Reformation. The Nurse remembers that Juliet's childhood was full of unlucky omens: there was an earthquake the day Juliet was weaned, and when she learned to walk she "broke her brow" (1.3). It's as if there was no chance that the person could have survived the situation, because they acted according to their basic nature.
Enabling multi-faceted measures of success for protected area management in Trinidad and Tobago

A key challenge has been to define and measure "success" in managing protected areas. A case study was conducted of efforts to evaluate the new protected area management system in Trinidad and Tobago using a participatory approach. The aim of the case study was to (1) examine whether stakeholder involvement better captures the multi-faceted nature of success and (2) identify the role and influence of various stakeholder groups in this process. A holistic and systematic framework was developed with stakeholder input that facilitated the integration of expert and lay knowledge, a broad emphasis on ecological, socio-economic, and institutional aspects, and the use of both quantitative and qualitative data, allowing the evaluation to capture the multi-faceted nature and impacts of protected area management. Input from primary stakeholders, such as local communities, was critical, as they have a high stake in protected area outcomes. Secondary and external stakeholders, including government agencies, non-governmental organizations, academia and the private sector, were also important in providing valuable technical assistance and serving as mediators. However, a lack of consensus over priorities, politics, and limited stakeholder capacity and data access pose significant barriers to engaging stakeholders to effectively measure the management success of protected areas.
An estimated seven to 10 percent of all the high-tech products sold worldwide are counterfeits. In 2004, U.S. Customs seized $200 million in counterfeit goods, $134 million of which originated in China. Trade in counterfeit goods costs U.S. businesses an estimated $2 billion per year in copyright losses alone. Both consumers and manufacturers are adversely affected.

The flash memory card that someone bought on eBay for a too-good-to-be-true price is likely to be just that: too good to be true. Internet forums abound with stories of people purchasing flash cards only to find that their memory capacity is less than claimed. In other cases the card simply does not work at all. Over the years, counterfeiters' sophistication has increased to the point that it is difficult to identify the fakes visually unless you compare them side by side with the genuine article. In some cases, spelling mistakes in the packaging give it away; in other cases, it may be the way in which the plastic case is sealed.

Manufacturers see counterfeiters as a problem for different reasons. The obvious problem is loss of revenue: sales of counterfeit parts are, to an extent, lost sales for the manufacturer of the genuine device. Though no one knows for sure, it is estimated that every year $1 billion to $10 billion of sales revenue is lost to counterfeit goods. Another cost relates to customer service and support. When a counterfeit fails or does not perform up to standards, the unwitting consumer may return the device for replacement or repair. Though manufacturers are not responsible for the counterfeits, it still takes time to service the customer, and some manufacturers will replace the counterfeit with a genuine product to improve customer relations.

In any case, if a large number of counterfeits of a manufacturer's products are known to be circulating in the marketplace, it can seriously damage the company's reputation and its brand value. Companies invest years of time and large sums of money developing the value of their brand. BusinessWeek and Interbrand tracked the value of corporate brands and published a list of the top 100 global brands in 2005. Topping the list were such household names as Coca-Cola, at $76 billion, and Microsoft, at $60 billion; electronics manufacturers include Intel, Samsung and Sony (see table). The dollar value attributed to each brand was calculated from publicly available data, projected profits and such variables as market leadership, and takes into account the company's reputation for reliability and quality. Reputation is hard to nurture and develop, and can easily be hurt by the bad publicity caused by significant numbers of counterfeit devices in circulation.
Unused ideas abound in many companies. When Procter & Gamble surveyed all of the patents it owned, it determined that only about 10% of them were in active use in at least one P&G business, and that many of the remaining 90% had no business value of any kind to P&G (Sakkab, 2002). Yes, keeping these patents off the market would prevent the cost of false positives, but why not let them be tested under different business models? Taking these remaining patents outside might cause a lot of internal resistance, but the cost of letting them remain unused is even higher.

Subtle business models have emerged in the "creative commons" arena. One example of such a model is when companies voluntarily choose to donate portions of their intellectual property to a "commons", so that they and others can practice their technologies freely without fear of being sued for patent infringement. This boosts the amount of innovative activity in the area and effectively lowers the cost of producing useful output for the customers of that activity. Intel has boosted innovation by creating "lablets" that work closely with universities to collaborate on research that will be published, not owned by Intel. IBM recently created a powerful example of this in its decision to transfer 500 software patents to a nonprofit foundation in the open source community. Instead of having to pay Microsoft or another company for a proprietary operating system, open source guarantees a cheaper alternative that will work well with IBM's products and services.

Whether you are in a large organization or a small one, chances are you need to open up your innovation processes. But in order to do this effectively, you must connect your business model to your innovation process. Large companies typically enjoy strong business models, but it is harder for them to change those models in order to exploit open innovation opportunities. Small companies, on the other hand, lack the strong business model and the resources to exploit the opportunities of open innovation without fear of being copied by a larger foe. IP protection can only be one of many tools in their business model as they strive for success.

For more information, see Ch. 2, Open Business Models, by Henry Chesbrough.
PM Concepts: Processes and Knowledge Areas

The goal of this article is to outline the Project Management Processes and PM Knowledge Areas of a project. You will cover five project management process groups and nine PM Knowledge Areas. You will also list the 44 major processes that comprise the process groups and learn how these processes align with the PM Knowledge Areas. Finally, I will summarize each process.

Project Management Processes of a Project

Project management is achieved via processes. A Project Management Process is defined as "a set of interrelated actions and activities that are performed to achieve a pre-specified set of products, results, or services" (PMBOK® Guide, Chapter 3). Almost all projects use much the same set of processes to accomplish project management successfully. The project management team is responsible for selecting the appropriate processes to comply with project requirements and balance the "triple constraints" (time, scope, and budget) of a project. Each process and its inputs and outputs should serve as a high-level guide, and the project management team should "tailor" each process to the individual needs of its project.

Project Management Processes deal with initiating, planning, executing, monitoring and controlling, and closing a project. All processes interact throughout the project via their constituent inputs and outputs. "Successful project management includes actively managing these interactions to successfully meet stakeholder requirements." (PMBOK® Guide, Chapter 3)

Project Management Process Groups

The management processes are aggregated into five project management process groups:
- Initiating Process Group: Defines and authorizes the project or a project phase
- Planning Process Group: Defines and refines objectives, and plans the course of action required to attain the objectives and scope
- Executing Process Group: Integrates people and resources to carry out the project management plan for the project
- Monitoring and Controlling Process Group: Measures and monitors progress to identify variances so that corrective action can be taken to meet project objectives
- Closing Process Group: Formalizes acceptance of the product, service, or result and brings the project or a project phase to an orderly end

The interaction among the five process groups is depicted by the following figure, which is derived from the simpler Plan-Do-Check-Act (PDCA) cycle diagram.

Figure 1: Modified PDCA Cycle Diagram

Project Management Knowledge Areas

There are nine Project Management Knowledge Areas, which group the 44 Project Management Processes. The following list briefly describes each PM Knowledge Area:
- Project Integration Management: Deals with processes that integrate the different aspects of project management. This knowledge area covers developing the Project Charter, the Preliminary Project Scope Statement, and the Project Management Plan. It also deals with directing and managing project execution, monitoring and controlling project work, integrated change control, and closing a project.
- Project Scope Management: Encapsulates the processes responsible for controlling project scope. It consists of Scope Planning, Scope Definition, Create WBS, Scope Verification, and Scope Control.
- Project Time Management: Includes processes concerning the time constraints of the project. It deals with activity definition, sequencing, resource estimating, and duration estimating, as well as schedule development and schedule control.
- Project Cost Management: Includes processes concerning the cost constraints of the project.
Some of the processes that are part of this knowledge area are Cost Estimating, Cost Budgeting, and Cost Control.
- Project Quality Management: Describes the processes that assure the project meets its quality obligations. It consists of Quality Planning, Perform Quality Assurance, and Perform Quality Control.
- Project Human Resources Management: Includes the processes that deal with obtaining and managing the project team. The processes of this knowledge area are Human Resource Planning, Acquire Project Team, Develop Project Team, and Manage Project Team.
- Project Communications Management: Describes the processes concerning the communication mechanisms of a project, namely Communications Planning, Information Distribution, Performance Reporting, and Manage Stakeholders.
- Project Risk Management: Describes the processes concerned with project-related risk management. It consists of Risk Management Planning, Risk Identification, Qualitative Risk Analysis, Quantitative Risk Analysis, Risk Response Planning, and Risk Monitoring and Control.
- Project Procurement Management: Includes all processes that deal with obtaining the products and services needed to complete a project. It consists of Plan Purchases and Acquisitions, Plan Contracting, Request Seller Responses, Select Sellers, Contract Administration, and Contract Closure.

Project Management Process Mapping

As stated earlier in the article, Project Management is composed of 44 processes, each mapped to one of the nine Project Management Knowledge Areas listed in the previous section. The following table maps the 44 processes to process groups and knowledge areas.

| Knowledge Area | Initiating | Planning | Executing | Monitoring and Controlling | Closing |
|---|---|---|---|---|---|
| Project Integration Management | Develop Project Charter; Develop Preliminary Project Scope Statement | Develop Project Management Plan | Direct and Manage Project Execution | Monitor and Control Project Work; Integrated Change Control | Close Project |
| Project Scope Management | | Scope Planning; Scope Definition; Create WBS | | Scope Verification; Scope Control | |
| Project Time Management | | Activity Definition; Activity Sequencing; Activity Resource Estimating; Activity Duration Estimating; Schedule Development | | Schedule Control | |
| Project Cost Management | | Cost Estimating; Cost Budgeting | | Cost Control | |
| Project Quality Management | | Quality Planning | Perform Quality Assurance | Perform Quality Control | |
| Project Human Resources Management | | Human Resource Planning | Acquire Project Team; Develop Project Team | Manage Project Team | |
| Project Communications Management | | Communications Planning | Information Distribution | Performance Reporting; Manage Stakeholders | |
| Project Risk Management | | Risk Management Planning; Risk Identification; Qualitative Risk Analysis; Quantitative Risk Analysis; Risk Response Planning | | Risk Monitoring and Control | |
| Project Procurement Management | | Plan Purchases and Acquisitions; Plan Contracting | Request Seller Responses; Select Sellers | Contract Administration | Contract Closure |

Figure 2: Mapping of the 44 processes to process groups and knowledge areas
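Because the table above is just a two-way classification, it can be handy to hold it as a small lookup structure and query it either by process group or by knowledge area. Here is a minimal sketch covering only a fragment of the 44 processes; the names are simply those from the mapping above, and extending the dictionary to the full table is mechanical.

```python
# Process -> (knowledge area, process group); a fragment of the full map.
PROCESSES = {
    "Develop Project Charter":         ("Integration", "Initiating"),
    "Develop Project Management Plan": ("Integration", "Planning"),
    "Scope Definition":                ("Scope", "Planning"),
    "Scope Verification":              ("Scope", "Monitoring and Controlling"),
    "Quality Planning":                ("Quality", "Planning"),
    "Perform Quality Assurance":       ("Quality", "Executing"),
    "Perform Quality Control":         ("Quality", "Monitoring and Controlling"),
    "Contract Closure":                ("Procurement", "Closing"),
}

def by_group(group):
    """All processes that run within a given process group."""
    return sorted(p for p, (_, g) in PROCESSES.items() if g == group)

def by_area(area):
    """All processes belonging to a given knowledge area."""
    return sorted(p for p, (a, _) in PROCESSES.items() if a == area)

print(by_group("Monitoring and Controlling"))
# ['Perform Quality Control', 'Scope Verification']
print(by_area("Quality"))
# ['Perform Quality Assurance', 'Perform Quality Control', 'Quality Planning']
```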
Cobra Energy Plans Massive $1 Billion Solar Thermal Plant for Australia
Posted by Ariel Schwartz on March 18, 2010, in Renewable Energy, Solar Power

We've seen our fair share of giant renewable energy projects, but we're particularly intrigued by Cobra Energy's plan to construct a $1 billion, 250 MW solar plant in Australia. The power plant won't be as powerful as a typical coal-fired operation, but it will be one of the biggest solar thermal plants on the planet.

When completed, Cobra's plant will be powered by both photovoltaic panels and solar thermal technology, which uses the sun's heat to boil water and generate energy. Cobra's key to effective solar thermal is the use of molten salts to store extra energy during the day. After the sun goes down, the stored heat can continue to generate power for another 7.5 hours.

The Cobra plant isn't a done deal quite yet: the Spanish company is still deciding between a number of sites in Australia. And while Cobra has applied for funds under the Australian government's $1.5 billion solar flagships program, it has yet to receive approval. But if the deal goes through, Cobra's plant could provide over half of the country's goal of 400 MW of solar power.

+ Cobra Energy
Via The Australian
Situated on 80 hectares of private land approximately 50 km southeast of Geraldton in Western Australia (WA) sits the Greenough River Solar Farm, the first utility-scale photovoltaic (PV) project in Australia. Ten times larger than any other operating solar power plant in the country, the project highlights the immense potential of utility-scale solar to assist Australia in transitioning to a renewable energy future.

Verve Energy, the leading generator of electricity in WA, and GE Energy Financial Services each own 50 percent of the solar farm. The project is funded with 100% equity, with the WA Government providing A$20 million, including A$10 million from the WA Royalties for Regions program. In addition to supplying the solar modules for the project, First Solar provided the engineering, procurement, and construction (EPC) services, as well as operations and maintenance (O&M) support. Local contractor WBHO Civil played a pivotal role in providing site preparation, underground electrical services, and civil works, generating millions of dollars for the City of Greater Geraldton.

The project created jobs for about 100 people at the peak of construction, has served as the impetus for other solar farms in Australia, and is helping to drive down the cost of solar energy.
In business, "sustainability" used to have a simple meaning, referring to a company's ability to maintain and grow its operations and profits. Today, the term is much more worldly, often applying to the energy-efficient and environmentally friendly practices that a business undertakes in consideration of the greater good. Even with existing office spaces and limited budgets, businesses can incorporate sustainable practices into their daily operations that will deliver a significant return on investment, for the environment and for their bottom line.

"There are surprising ways that companies can save money by looking at where their inefficiencies are and implementing green practices," says Anna Dengler, the director of sustainability at Great Forest, a New York City-based company that offers sustainability consulting for companies, schools and other organizations. Here's how companies can get started toward a greener path.

Because compliance among employees can be a hurdle for implementing environmentally friendly programs, start with small, highly visible programs that deliver measurable results, Dengler recommends. Consider launching a recycling program or switching to energy-efficient compact fluorescent light bulbs, which consume up to 75 percent less energy than traditional incandescent bulbs, according to the Environmental Protection Agency (EPA). Initiatives such as these can help generate the buy-in that's necessary to launch a larger sustainability program, Dengler says.

Examine your energy waste

Heating and cooling is the primary source of energy consumption in commercial buildings, according to the EPA, but upgrading to high-efficiency models is a significant expenditure for most companies. Just as homeowners are advised to find areas where conditioned air is being wasted before they buy energy-efficient windows or solar panels, Dengler recommends that business owners or management start by asking themselves, "Are we using energy wisely?"

The EPA estimates that office buildings waste up to one-third of the energy they consume. Heating and cooling vacant areas is a common contributor to this wasted energy, Dengler says. If possible, keep unused spaces separate and reduce the amount of conditioned air they receive. Other inexpensive ways to cut energy costs include making sure that the space has a correctly calibrated and properly programmed thermostat that is set to run less frequently during non-office hours. If you have control of the heating and cooling equipment, make sure it receives regular maintenance so that it can function at its optimum, suggests Joshua Arias, a certified eco-consultant and vice president of Green Mojo Eco Consulting in New Jersey.

After heating and cooling costs, lighting is the next most significant source of a company's energy consumption, Arias says, and "general" lighting is one of the most wasteful components. "There is too much general lighting in offices," he says. "What you need is task lighting." Arias recommends allowing employees to determine how much light they need in their work area, and installing dimmer switches or occupancy sensors (which turn the lights on when motion is detected) in infrequently used spaces like kitchen areas or hallways.
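To see how quickly the lighting advice can pay back, here is a back-of-the-envelope sketch. Only the "up to 75 percent less energy" figure comes from the article; the bulb counts, wattages, hours and electricity rate are assumptions invented for the illustration.

```python
def annual_lighting_cost(n_bulbs, watts, hours_per_day,
                         rate_per_kwh, days_per_year=260):
    """Yearly electricity cost of a bank of bulbs, office days only."""
    kwh = n_bulbs * watts / 1000.0 * hours_per_day * days_per_year
    return kwh * rate_per_kwh

# 100 bulbs, 10 h/day, at an assumed $0.15/kWh: 60 W incandescents
# vs. CFLs drawing 75% less power (15 W).
old = annual_lighting_cost(100, 60, 10, 0.15)
new = annual_lighting_cost(100, 15, 10, 0.15)
print(f"incandescent ${old:,.0f}/yr vs. CFL ${new:,.0f}/yr "
      f"saves ${old - new:,.0f}/yr")   # saves $1,755/yr
```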
What you'll learn to do: consider and understand the need for and value of ethics in business decisions

In this outcome, you will see why a business known for its ethical reputation can be even more valuable than one known for legal compliance. Here are some of the specific things you'll learn to do in this section:
- Identify, understand and distinguish between alternative theories of ethics
- Identify, understand, explain and apply alternative ethical decision models to business decisions

The learning activities for this section include:
- Reading: What Is Ethics?
- Reading: Major Ethical Perspectives
- Reading: An Ethical Decision Model
- Reading: Corporations and Corporate Governance

Take time to review and reflect on each of these activities in order to improve your performance on the assessment for this section.
What Materials Have the Highest and the Lowest Recycling Rates in the US

Based on data provided by the United States Environmental Protection Agency (EPA), during 2017 about 25% of total municipal solid waste in the US was recycled, 10% was composted, and 12.7% was combusted with energy recovery, while a staggering 52.1% was landfilled. These statistics show that the US ranks quite low where recycling and composting are concerned, at least compared to other countries of the Western world, like Germany, Denmark, the UK or the Netherlands. Even though many American cities have shown serious initiative to reduce pollution and take a more radical approach to waste management, imposing strict bans on many harmful and unnecessary materials, the problem still isn't resolved at the state, federal or corporate level. To shed a bit of light on this problem, as well as to see the state of recycling in the US, let us take a look at the most and least recycled materials.

The least recycled materials

According to the EPA, plastic was the third biggest contributor to the total amount of municipal solid waste in 2017, right after food and paper/paperboard waste. Yet even though plastic plays an important role in overall pollution and waste production, only 8% of the total plastic generated in 2017 was recycled. And even though some positive changes have been made since, the problem still isn't significantly closer to a solution than it was three years ago. During 2019, it was revealed that much of the plastic gathered for recycling actually ended up in landfills. Why? Market issues. Namely, China increased its quality requirements for plastic recycling in 2017 and stopped buying significant amounts of plastic waste from the US. Many US recycling companies failed to find new buyers, which resulted in large amounts of mixed plastic waste remaining in the country. Only 56% of the total plastic waste the US once exported is still being accepted by foreign markets. This is a serious problem, considering that mixed plastic is deemed unrecyclable. It often comes down to landfilling or incineration in the end, and both of those methods are unsustainable in the long run.

Even though it is 100% recyclable, glass is the second least recycled material in the US. This is quite odd, considering that glass has very favorable qualities in terms of recycling and the whole process is quite energy- and cost-efficient compared to making new glass from raw materials. Quite a few things may be contributing to this problem, but the ones that play the biggest roles are government policies and consumer education and habits. People often don't separate their trash properly, or leave it contaminated with food, liquids and the like, which often makes it unviable in terms of recycling. This behavior, combined with single-stream curbside collection, significantly decreases efficiency and increases the effort needed to recycle municipal waste. Glassmakers often find it economically unfeasible to process and prepare great amounts of glass waste into a furnace-ready form. These setbacks are the main reason why the recycling rate of glass in 2017 was only 27%.

The most recycled materials

Measured by percent of generation, lead-acid batteries are by far the most recycled material in the US, according to data provided by the EPA. From 2010 until 2017, lead-acid batteries had a consistent recycling rate of 99.1%. This huge recycling efficiency is possible because lead-acid batteries have an almost closed-loop cycle of production.
Every battery contains between 60% and 80% recycled lead and plastic, which makes the production process less polluting while also conserving natural resources. Corrugated boxes are the second most recycled material in the US, with a recycling rate of 88.4%. Thanks to the active commitment of the industry and numerous educational programs, corrugated cardboard recycling has achieved great success in terms of material acquisition and efficiency.

Steel is also a highly recycled material. Measured by percent of generation, steel cans had a solid recycling rate of 70.9% in 2017. However, ferrous metals in general have relatively low recycling rates, even though they have favorable recycling qualities and are the largest category of metals found in municipal waste. In 2017, the rate of ferrous metal recycling from durable goods (small and big appliances, furniture, tires, etc.) was estimated to be only 27.8%. Also, in spite of all efforts, about 10.4 million tons of steel ended up in landfills, which means that steel made up 7.5% of all landfilled waste in 2017.

Aluminum beer and soda cans hold fourth place on this list, with a recycling rate of 49.2%. However, this situation seems to be changing, considering the current problem with single-use plastic bottles and the fact that aluminum is the most sustainable beverage packaging. This metal is light and strong, easily transported, and can be recycled almost indefinitely in a true closed-loop process, which makes its production very efficient in terms of material utilization and energy consumption. Considering that the plastic bottle trend is unsustainable in many respects, an increasing number of beverage manufacturers are starting to see aluminum cans as a good alternative to single-use plastic. A couple of companies have already caught on to this trend and started producing canned water.

More and more people are becoming aware of current unsustainable practices and how polluted the environment is becoming. California, New York, and many other states are becoming increasingly active in terms of sustainability and the search for environmentally friendly solutions. This list of most and least recycled materials ought to change very soon, at least if we intend to live and grow food in an environment that is clean and healthy. One of the biggest setbacks of the current recycling process in the US is the lack of proper waste management, in terms of adequate waste disposal and the organization of curbside services. To increase rates and make recycling more efficient, it is of utmost importance that trash is separated and disposed of properly at the start of the process. This makes pre-processing less costly and the whole recycling process quicker, easier and more cost-efficient. Although this might seem like a very small change initially, don’t forget that a great number of small positive changes always result in something great. Germany’s ecological policies are living proof that this way of thinking can do wonders.
The largest adopter of wind energy will be China, where wind turbines will generate almost 2000 TWh a year by 2050, compared with virtually nothing now. OECD Europe will be the second-largest regional market, at more than 1000 TWh/a from wind. Global output from wind turbines will reach 5000 TWh by 2050, the roadmap estimates. Solar photovoltaic (PV) will generate another 11% of global electricity by mid-century, reaching 4500 TWh/a, of which half will come from the residential sector. The predictions were made by IEA executive director Nobuo Tanaka in a presentation to EU energy leaders at a meeting in Seville, Spain. He discussed the IEA’s 450 ppm climate scenario, the implications of the recent COP15 climate summit in Copenhagen, and the IEA energy technology roadmaps. “The financial crisis has halted the rise in global fossil-energy use, but its long-term upward path will resume soon on current policies,” he explains. “Tackling climate change and enhancing energy security require a massive decarbonisation of the energy system.” Energy efficiency is the low-hanging fruit, but energy technology is the key to meeting the 2050 goals. Demand for natural gas continues to grow in any future scenario, peaking by 2025 in the low-carbon scenario, and Tanaka says gas can play a key role as a bridge to a cleaner energy future. An additional US$10.5 trillion of investment will be needed to reach the 450 scenario, and every year of delay in progress will add US$500 billion to mitigation costs. Renewable energy, nuclear and generating plants fitted with carbon sequestration will account for 60% of electricity by 2030 in the 450 scenario, up from less than one-third today, he adds. Improvements to the internal combustion engine and the uptake of next-generation vehicles and biofuels will lead to a 56% reduction in new-car emission intensity, but non-OECD countries will account for 93% of the increase in global energy demand by 2030, driven largely by China and India. Energy subsidies among the 20 largest non-OECD countries reached US$310bn in 2007, creating “an unsustainable economic burden and exacerbating environmental effects.” While in Spain, Tanaka visited Abengoa Solar’s PS20, a 20 MW concentrating solar power plant.
Oil water separators are an extremely beneficial and effective tool for removing oils, hydraulic fluids, fuels and other petroleum-based products from water. They usually have no moving parts and work on the basic fact that oil floats on water. An oil water separator is a device designed to remove oil and solids from water to prevent harm to the environment. Its operation is based on the difference between the density of oil and the density of water; engineers and researchers refer to this weight per unit of volume as specific gravity. When an oil water separator is working properly, the oil rises to the top, the wastewater sits in the middle, and the solids settle on the bottom. The oil is skimmed from the top and reclaimed or reprocessed, the solids are scraped off the bottom, and the wastewater is sent to a treatment center for final treatment.

The key to separating oil from wastewater is increasing the size of the oil droplets. The larger the oil droplet, the faster it rises to the surface and the more oil can be removed. There are numerous ways to increase the size of the droplets, but the most common is simply letting gravity act on the oil naturally. The problem is that, due to the motion of the oil droplets through the water, many of them develop a static charge that repels other droplets. To prevent the build-up of these repelling static charges, the velocity of the wastewater moving through the separator must be limited to about 3 feet per minute.

There are other factors that keep the oil water separator from being a complete solution for cleaning oil and petroleum-based spills from wastewater. For example, the typical oil water separator does not remove oil that is suspended in wastewater, so the water must be treated a second time before being released to the environment or into the sewer system. When oil is suspended in wastewater or salt water, it is very difficult to remove quickly and effectively; the process is far more complicated, costly and time-consuming. In this article, we’ll concentrate on the common types of oil water separators available to the majority of people who need to buy one. There are various types of separators in use today, but the most common are gravity-type separators and coalescing (collecting) plate separators.

Gravity-type separators rely on the fact that oil floats on wastewater as the basis for how they operate. The wastewater/oil mix is fed into a tank that is wide enough, long enough and deep enough to provide a wide, quiet zone where oil can rise to the surface to be collected and removed. This type of separator requires a large volume of water and does not remove enough oil to satisfy government guidelines for discharge into the environment. It must be part of a multi-step system that includes a treatment facility downstream of the separator before the wastewater is released. Another drawback of this type of separator is that it must be up to five times larger than a gravity coalescing plate separator in order to work properly.

Gravity-type coalescing plate separators: these separators work essentially the same way as the gravity type, except they introduce plates into the stream of wastewater that give the oil a place to accumulate before it floats up to the surface.
The effectiveness of this kind of separator is directly related to the area and number of plates in the outflow stream. The more surface area and the greater the number of plates, the more efficient the separator. Because there is much more surface area than in a gravity-type separator, gravity plate separators are far smaller. Gravity plate separators (also known as coalescing plate separators) are so effective at removing oil that the wastewater leaving them satisfies federal government specifications without undergoing another treatment process. Neither the gravity separator nor the gravity coalescing separator requires electrical power to operate.

There are several things that can make an oil water separator not work as it should:
- Not maintaining it, or maintaining it incorrectly
- Poor design
- The wrong separator chosen for the job
- Failure of the people operating the separator to understand how it works

The result of an oil water separator not working properly is that oil is not removed from the wastewater and is released to the environment at higher concentrations than federal regulations allow, leading to fines and contamination of sewers, streams, ponds and rivers.

Essential Things to Consider When Choosing an Oil Water Separator

Simple tanks with baffles that rely on gravity to recover oil are not very effective at removing enough of it to meet federal requirements. This is because these devices only remove the oil floating at the surface, not the oil suspended below it. There are several things you need to consider when choosing an oil water separator. Here are a few standard specifications to get you going on your purchase of the best oil water separator for your particular situation:
- Determine the limits the government has set for the amount of pollutants in the water you discharge, and make sure the model you are considering can meet that requirement.
- Determine the size of the unit you need based on the quantity of pollutants and the volume of water you must process. If you purchase a device that is too small, you will find yourself dealing with regulatory agencies because you are out of compliance with regulations.
- The type of contaminant you are trying to remove will determine which model you choose and how effective it will be at doing its job for you.
- The viscosity of the fluid you need to remove will also dictate which device you need. Most oil water separators work best with low-viscosity products that float on the surface; it becomes progressively harder to recover contaminants that have the consistency of cake batter.
- Decide whether you need a permanent solution or a temporary off-the-shelf separator. The benefit of an off-the-shelf device is that if your requirements change, it is not a major cost to upgrade or modify your separator for the new demands.

Most oil water separators are designed to remove oil from water when the oil is floating on top of the water. There are very few devices that can successfully and efficiently remove oil when the oil is mixed in with the water. You have probably heard that oil and water don’t mix, but they can. In science and chemistry this is called an emulsion.
An emulsion occurs when liquids that normally don’t mix are blended together by some outside force, such as a mixer. Well-designed oil water separators, including those sold by Freytech, let you remove the wastewater and oil without having it pumped out by a septic company. The only time you should call a pumping company is when the solids captured at the bottom of the separator build up to the point where they begin to affect its operation. The key to separating water and oil is finding a way to increase the size of the oil drops so they float quickly to the surface, and there are several ways to do so. The vast majority of oil water separators are used in controlled environments at fixed industrial and commercial locations. When you begin to consider using a separator for oil spill recovery, the whole game changes. Here are the minimum requirements for oil water separators used in oil spill recovery, as suggested by the US Coast Guard and the Marine Spill Response Corporation:
- A processing capacity of 250 – 500 gallons per minute (357 – 754 barrels per hour)
- Fairly lightweight, compact and easy to transport, with a weight in the range of 4,000 – 6,000 lbs
- Dimensions that fit within a 5 ft x 5 ft x 5 ft area, so it won’t take up much space on a ship or boat
- The ability to process/separate fluids within a wide range of viscosities, from cooking-oil consistency to warm-molasses consistency (1,500 – 50,000 cSt)
- Effluent water clean enough after processing not to exceed 15 parts per million (ppm) of oil, so it can be discharged immediately back into the environment

This should give you a good understanding of how oil water separators work and an idea of what to look for when buying one for your application.
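Two rules of thumb run through this article: bigger droplets rise faster, and more collection area captures more oil at a given flow. The sketch below makes both concrete, using Stokes' law for droplet rise velocity and the standard sizing relation (flow must not exceed rise velocity times collection area). All numeric inputs here (droplet sizes, fluid densities, viscosity, the 100 gpm duty) are illustrative assumptions, not figures from the article.

```python
# Stokes' law: v = g * d^2 * (rho_water - rho_oil) / (18 * mu)
# Sizing rule: a droplet is captured when it reaches a collection surface
# before the flow carries it out, i.e. Q <= v_rise * A. Coalescing plates
# shrink the vessel by multiplying the effective collection area A.

G = 9.81            # gravitational acceleration, m/s^2
MU_WATER = 1.0e-3   # dynamic viscosity of water at ~20 C, Pa*s

def rise_velocity_m_s(d_m, rho_oil=850.0, rho_water=998.0):
    """Terminal rise velocity of a small oil droplet (Stokes' law)."""
    return G * d_m**2 * (rho_water - rho_oil) / (18.0 * MU_WATER)

def required_area_ft2(flow_gpm, v_rise_ft_min):
    """Horizontal collection area needed to capture droplets at this flow."""
    return (flow_gpm / 7.48) / v_rise_ft_min  # gallons -> cubic feet, then Q/v

M_S_TO_FT_MIN = 196.85  # unit conversion, m/s to ft/min

for d_um in (20, 60, 150):
    v = rise_velocity_m_s(d_um * 1e-6) * M_S_TO_FT_MIN
    a = required_area_ft2(100.0, v)
    print(f"{d_um:>4} um droplet: rises {v:6.3f} ft/min, "
          f"needs ~{a:,.0f} ft^2 at 100 gpm")
```

Doubling the droplet diameter quadruples the rise velocity, which is why coalescing plates, by growing droplets and multiplying collection area, can do the same duty in a vessel several times smaller than an open gravity tank.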
The notions of what can be termed socially responsible, sustainable or green supply chains have an obvious prior history among multiple industry supply chains and networks. On this Supply Chain Matters blog, we have highlighted multi-industry supply chain efforts within these areas dating back to our inception in 2008. Some have stemmed from prior negative history, with customer, governmental and watchdog groups providing visibility into unacceptable practices. These unfortunately came about from a quest to seek out the lowest-cost global sources of raw materials, components and end products. Lowest-cost sourcing or manufacturing sometimes did not equate to socially or environmentally responsible practices. The lessons learned amounted to negative customer perceptions of certain brands; perceptions linked to the exploitation of human labor as well as of our global environment. Increased global citizen awareness of the overall threat of global warming has changed attitudes considerably as well. The United Nations has identified three sectors, Energy, Agricultural Practices and City Infrastructure, as the most critical toward achieving GHG emissions reduction, since each is responsible for a major portion of such emissions. “Trends in temperature readings from around the world show that global warming is indeed taking place. Every one of the past 40 years has been warmer than the 20th century average. 2016 was the hottest year on record. The 12 warmest years on record have all occurred since 1998.” Further noted: “As CO2 levels increase, the pace of warming accelerates. Satellite measurements confirm that less heat is escaping the atmosphere today than 40 years ago. Though other heat-trapping gases also play a role, CO2 is the primary contributor to global warming.”

The Supply Chain’s Role

When companies establish multi-year sustainability goals, they invariably require a focus on the company’s supply and customer demand networks, since product value-chains and supporting services are responsible for a considerable portion of carbon and greenhouse gas emissions. Consider the carbon emissions footprint of logistics and transportation, manufacturing, agricultural production, and the generation of paper-based documents, along with reducing waste across all processes. Led by high-profile corporations and smaller, responsibly minded businesses, meaningful change is occurring. Many consumer products and food producers such as Procter & Gamble, Nestle and Unilever are recognized for their wide-reaching efforts to incorporate sustainability in business strategy. Beverage companies such as Coca-Cola, PepsiCo and SAB recognize that their large consumption of water makes water a critical component of a sustainability strategy. They have appointed senior managers responsible for water conservation and sustainability initiatives that ensure supplies of water are continuous. High-profile manufacturers in the high-tech and consumer electronics sector such as Apple, Dell, Hewlett Packard and others have long been at the forefront of sustainability initiatives. Across various other industries, innovators have been openly active and committed to sustainability efforts because they drive meaningful benefits. New initiatives have increasingly been spawned from committed corporate sustainability goal setting. A survey among U.S.
Fortune 1000 CEOs and C-suite executives, conducted in 2018 by Covestro LLC, a producer of high-performance polymers, found that 51 percent of executives believe there is inherent tension or conflict between a company being profit-driven and being purpose-driven. However, most (69 percent) indicate that the act of balancing profit and purpose is having a positive, transformational impact on business. Nearly 80 percent indicated that a company’s future growth and success will hinge on a value-driven mission that balances profit and purpose, with three-quarters (75 percent) believing these companies will have a competitive advantage over those that do not. Finally, but not least, a significant 86 percent confirm that today’s top talent is more inclined to work for companies that have a demonstrated commitment to social issues. Indeed, the good news is that much more positive change is occurring, change that relates to visible, proactive corporate social responsibility efforts that directly link to corporate business objectives and the enhancement of brand value. Sustainability efforts ensure strategic continuity of commodity supply, raw materials and natural resources. They confirm that a business has plans and strategies that can support long-term competitiveness and industry leadership. On the product demand side, consumers and business customers are much more attuned to seeking out components and products that are both socially responsible and environmentally sustainable. As an example, today’s Millennial generation cares about the environment along with social values and, in turn, bases buying and loyalty decisions on the reputation of the particular company or brand. Consumers further expect, and with increasing frequency demand, visibility into where products such as food, drugs or raw materials were sourced in the supply network.

The New Emphasis on the Ethical Supply Chain

The terms ‘green’ and ‘sustainable’ are often used interchangeably, and now a new term has emerged: the ‘Ethical Supply Chain.’ This term takes on a broader umbrella and focus across extended supply networks. It includes efforts by manufacturers, retailers and services providers, as meaningful extensions of corporate purpose, to leverage advanced technology for enabling broader social, sustainable and ethical practices. In my recent travels among technology conferences and in discussions with supply chain leaders, I have noticed that technology providers are, as well, becoming more sensitive to applying technology to assist companies in eliminating overall waste of important resources and to providing global social agencies access to advanced technology to better enable social, environmental and ethical responsibility needs. At the recent Oracle OpenWorld conference held in September, an applications keynote placed emphasis on efforts to assist Industries for the Blind and Visually Impaired (IBVI). More than seven million US adults are blind or visually impaired, and an estimated 70 percent of them are not employed full-time. IBVI employs people who are blind for a wide range of jobs, from assembly of tool kits for military troops to various customer service and office roles. IBVI is always seeking ways to improve product quality and accuracy around factors such as shipment status and inventory. Unlike most companies, however, IBVI is not looking to cut its labor costs. Its mission is to create opportunities.
At this week’s Kinaxis customer conference, in his CEO keynote address, John Sicard emphasized the sustainability purpose and role of that technology provider, namely to assist customers in enabling transparency across the supply chain in time, opportunity and resource dimensions, which can be key elements in efforts to eliminate overall waste. The most emphatic emphasis came at the OpenText Enterprise World conference held in July. In his mainstage keynote address, CEO Mark J. Barrenechea specifically shared remarks challenging more tech companies to increase their applications of technology for broader social and ethical needs. Cited were this Enterprise Information Management provider’s efforts to assist the United Nations Refugee Agency (UNHCR) in leveraging biometrics technology to register and uniquely identify displaced people, ultimately providing access to life-saving aid. The International Committee of the Red Cross (ICRC) is leveraging OpenText’s B2B technology platform to better assist agency employees in focusing on responding to emergencies or helping those affected by armed conflict. In the specific focus area of enabling Ethical Supply Chains, an on-stage demo illustrated how companies can soon leverage the OpenText Business Network Global Partner Directory. This directory lists more than 800,000 trading partners and services providers, and it can be used to identify specific suppliers that conform to or demonstrate certain ethical supply chain attributes. Indeed, positive change is occurring, change that relates to visible, proactive corporate social responsibility efforts that directly link to corporate business objectives and the enhancement of brand value. The umbrella of CSR will likely increasingly include notions of enabling more Ethical Supply Chains through the leveraged use of advanced technologies such as the Internet of Things (IoT), Blockchain, JAWS, bots, and others. The significance of one of the major cloud-based B2B business network providers’ commitment to this goal is noteworthy because it opens up end-to-end primary and secondary supply network transparency across multiple tiers of a product’s value-chain. From our lens, that can significantly springboard CSR and Ethical Supply Chain identification, conformance and adoption. © Copyright 2019, The Ferrari Consulting and Research Group and the Supply Chain Matters® blog. All rights reserved.
We usually don't think of a fire destroying our business or office building. Typically, we're worried about less concrete threats such as a downward-trending economy, high customer churn, the struggle to retain talented employees, legal issues, etc. But the fact is that, between 2004 and 2013, nonresidential building fires killed 65 people, injured 1,425, and caused $2,461,400,000 in damage. Learn what your business can do to prevent the most common ways that commercial building fires start.

When Commercial Fires Are Most Common

In order to prevent a dangerous and costly fire, let's take a look at when building fires are most likely to happen. This way, you can know when your business is most at risk, and what steps you can take to mitigate that risk. The time of day fires are most likely to occur in your building depends on what type of business you're in - an office building is a bit different from a manufacturing plant. For example, in office buildings, fires are most common during regular work hours. The incidence of fires peaks in the early evening hours between 3 and 6pm. Outside of the work day, between 7pm and 7am, fires are 31% less common. Additionally, there are not many fires on weekend days when office buildings are not in use.

Most Common Causes of Commercial Building Fires

1. Cooking Fires
29.3% of nonresidential fires in 2013 were cooking related. In fact, 1 in 4 office building fires were related to cooking equipment. These fires tend to account for less damage, but are easily preventable with fire protection systems such as alarms and fire extinguishers.

2. Intentional Fires
The second most common cause of nonresidential fires is intentionally set fires. These account for almost 10% of fires, and tend to cause the most damage. Intentional fires also result in more civilian injuries and deaths. Unlike cooking and heating fires, it's most common for intentional fires to be started between 3pm and midnight. A few common locations in your building to be aware of include:
- Trash bins
- Open areas like a lawn or field

3. Careless Acts and Human Error
9.2% of commercial fires were the unintentional results of careless acts. This is somewhat of an 'other' category. A few examples of careless acts that result in fires:
- Accidentally leaving space heaters or other heat-producing equipment on
- Carelessly discarded cigarette butts igniting fires
- Plugging too many things into the same extension cord

4. Heating Fires
Heating fires account for 9% of all nonresidential building fires. Central heating units, fireplaces, water heaters, and other heating appliances and systems should be regularly inspected to prevent fires. It's important to move any flammable materials and furniture away from heat sources, especially in the winter months when the heat is turned on.

Here are the top 5 causes of commercial building fires represented graphically in the 2004-2013 report by the U.S. Fire Administration:

How to Prevent a Nonresidential Fire

Run through the items below to see how protected your business is from a fire emergency. There might be something you're missing that could save you thousands of dollars in fire damage and loss.

Fire Suppression and Protection System in Place
- Fire extinguishers - The top cause of commercial building fires is cooking fires, and thankfully, most of these fires are small and contained. A fire extinguisher placed near the kitchen area can give employees the power to stop a small cooking fire from spreading. Make sure your employees are trained in using a fire extinguisher.
Read this if you're unsure whether or not your fire extinguishers are still in working condition.
- Fire alarms - This may sound obvious, but fire alarms are easily overlooked or left with dead batteries because they're not properly maintained. Not only is a fire alarm system required, but it can save lives and property from damage.
- Commercial fire sprinkler system - A sprinkler system can squelch a potentially deadly fire. The NFPA has no record of a fire killing more than 2 people in a building that was fully sprinklered. The majority of fires can be contained by just 1 or 2 sprinkler heads.

Testing and Maintenance

So you have all of the necessary fire protection systems in place. But how well maintained and up-to-date are they? Do you have expired fire extinguishers? Are you following the legal state requirements for getting these systems tested regularly? Make sure you check the local fire code to see what is required of businesses, and set up regular inspections and maintenance.

Install a Commercial Alarm System

No one likes to think that there are people who would intentionally start a fire on their property. Unfortunately, this is a scenario that you need to be prepared for. Invest in a security alarm system that will detect any suspicious activity in the evening hours when the building is most at risk. Keeping the area well-lit at night, or installing motion-sensor lights outside, can deter criminal activity. You and your employees will appreciate the feeling of safety and security. Many companies also offer alarm monitoring services to ensure that your business is protected at all hours of the day and night. Hopefully you now have the information you need to gather the products and services your business needs to protect itself from a crippling fire. Best of luck taking this important step!
Common Myths About Recycling

MYTH: Most Americans recycle all they can.
ANSWER: Research shows convenience and commitment are required for maximum recycling. For instance, is there more than one location in a household to store recyclables? If not, recyclables in areas other than the kitchen get thrown away. Additionally, is there only one committed recycler in a household (usually the person who picks up after everyone)? If so, studies indicate making this a family/partner affair, where everyone participates, allows the most recycling of the right materials.

MYTH: The recycling arrows (Mobius) on a container mean it is recyclable at a Material Recovery Facility (MRF).
ANSWER: Only in some cases. Manufacturers strive to get eco-friendly information on their product labels; it sells. The FTC requires that a product have at least 60% access to local programs (like Material Recovery Facility processing) across the U.S. to include the Mobius on its packaging. However, the Mobius is not a reliable indicator of whether something gets recycled. There are thousands of plastic products and packages, and each one has its own unique chemical recipe. Many plastics cannot be made into new products at this time. Recycle plastics by shape: bottles, jars, jugs, and tubs. Identify the myths of recycling and become an expert.

MYTH: Containers must be squeaky clean in order to be recycled.
ANSWER: While all bottles, cans, and containers should be clean, dry, and free of most food waste before you place them in your recycling container, they don’t need to be spotless. The goal is to make sure they are clean enough to avoid contaminating other materials, like paper. Try using a spatula to scrape cans and jars, or use a small amount of water and shake to remove most residue.

MYTH: Hoses, tanks, shower curtains, swing sets, etc. are made of plastic, so they must be recyclable.
ANSWER: If it’s not “bottles, cans, or paper,” it probably doesn’t belong in your curbside mixed recycling cart and may even require special handling. Just because an item is made from plastic, or contains plastic parts, doesn’t mean recycling facilities can handle it. There are other resources (e.g., Earth911.org) that can help answer questions about what to do with non-recyclables or household hazardous waste.

MYTH: All types of glass bottles and jars are recyclable.
ANSWER: Varies by jurisdiction. Glass collection varies widely in communities across the U.S. Some communities collect glass at drop-off locations only, some collect glass separately at the curb or with other containers, and many include glass with all other recyclables. Bottle bill laws in CA, CT, OR, IA, MI, ME, VT, MA, and NY allow for the return of a per-bottle deposit when bottles are returned to return centers or retailers for recycling. Greater Greenville Sanitation does not accept glass as part of its curbside program.

MYTH: Aerosol cans are acceptable in the recycle bin.
ANSWER: Varies by jurisdiction. Most recycling programs accept empty, dry aerosol cans. Aerosol cans without caps are recyclable if they are empty and dry. If they are not, they can be dangerous: some fires are caused in baler chambers by trace amounts of can chemicals, and cans have been known to become projectiles when densified/baled if propellant is still present. Waste Management facilities and commodity vendors accept steel, mixed-metal, and aluminum aerosol packages. Multi-material aerosol packages are not recyclable.
There is no gray area here from a processor standpoint; however, some cities still list aerosols on their no-recycle lists.

MYTH: All recyclables sent to a recycling facility are recycled.
ANSWER: On average, about 25 percent of the material we try to recycle is too contaminated to go anywhere but the landfill, according to the National Waste and Recycling Association, a trade group.

MYTH: Recyclables need to be washed, which wastes water.
ANSWER: Don't waste water! Recyclables only need to be emptied and dried. Rinse quickly only if soiled.
Jeff Herrin, Director of Advanced Programs at Sauer-Danfoss Inc., Ames, IA, says commercial products based on this technology are in active development and will be available in specific markets in the coming years. A Digital Displacement pump (DDP) is a hydraulic pump which uses the same core piston pumping principles as many of the commercial pumps currently available in the market, says Herrin. The difference, however, is in how the DDP is controlled. Output flow of a DDP is controlled by fast-acting electrohydraulic valves paired with each cylinder and piston in the pump. A nine-piston DDP, for example, will have nine active valves, one per piston, providing control of the pump, he says. Traditional pumps, on the other hand, control output flow by varying the angle of a single swashplate which controls all of the pistons in the pump simultaneously. "The electrohydraulic valves which control the pistons have to be very fast in order to provide the pump dynamics needed for most applications of hydraulic pumps," adds Herrin. Designing valves that are both very fast and able to pass a lot of flow through the valve body is not an easy task. "Those two requirements at the same time are a real challenge," he says. In addition, the large number of valves needed for every DDP makes it critical for developers of DDP technology to ensure reliability and robustness of the design in any kind of production environment. "But by resolving the challenges, there comes a lot of new benefits," says Herrin.

Why go DDP? According to Herrin, improved operating efficiency is one of the biggest benefits of using DDP technology. "Digital Displacement pumps themselves, when compared to traditional pumps, offer significant improvements in efficiency, especially at part-load operating conditions," he says. Because they provide very fast, dynamic pump control, DDPs offer efficiency at a component level as well as a system level, which leads to overall machine efficiency benefits such as improved fuel economy. The faster control response of a DDP enables it to provide more precise and repeatable flow and pressure control for applications that require it, such as robotics. Currently, many of these applications use high-fidelity valves to provide precise control. By using a DDP instead, the high-fidelity valve can be simplified, which can help reduce costs for customers. Herrin says the potential cost reduction is not a matter of DDPs costing less than traditional pumps but rather their ability to help reduce overall system design costs. When a DDP is used in place of a traditional pump, other components can be simplified or even removed from the system without sacrificing functionality. "It's more the system-level bill of materials and lifecycle cost comparison that becomes important and where benefits will be obvious to the OEM and also the end customer," says Herrin. Because DDPs are such an efficient technology, they are also well suited for use in hydraulic hybrid drivetrains. "Due to their precise controllability and excellent operating efficiency, Digital Displacement pumps and motors are actually the ideal technology for hybrid drivetrains," says Herrin. Hybrids are designed to provide total operating efficiency of the machine over the duty cycle, and in order to achieve that, the most efficient components must be used. Herrin says that at the moment, Digital Displacement is the most efficient pump and motor technology known to the industry.
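The per-cylinder valve control described above can be illustrated with a toy model: to deliver a fraction of full displacement, the controller decides on each shaft revolution how many of the nine cylinders actively pump. The sketch below is an assumed, minimal illustration of that idea using a simple accumulator scheme; it is not Sauer-Danfoss's actual control law.

```python
# Toy model of Digital Displacement control: each piston has its own
# fast electrohydraulic valve, so output flow is set by choosing, every
# revolution, which cylinders pump and which idle. Illustrative only.

N_CYLINDERS = 9

def cylinder_enables(demand_fraction):
    """Yield per-revolution enabled-cylinder counts averaging the demand.

    A first-order accumulator carries the rounding error forward, so a
    demand of 0.5 alternates between 4 and 5 enabled cylinders out of 9.
    """
    accumulator = 0.0
    while True:
        accumulator += demand_fraction * N_CYLINDERS
        n_enabled = int(accumulator)   # whole cylinders to enable now
        accumulator -= n_enabled       # carry the remainder forward
        yield n_enabled

gen = cylinder_enables(0.5)
counts = [next(gen) for _ in range(8)]
print(counts)                           # [4, 5, 4, 5, 4, 5, 4, 5]
print(sum(counts) / (8 * N_CYLINDERS))  # averages the demanded 0.5
```

Because idle cylinders do no pumping work in such a scheme, part-load losses can fall sharply compared with a swashplate held at a small angle, which is consistent with the part-load efficiency argument Herrin makes above.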
Many people ask about the difference between transport and logistics, and while the two are highly intertwined in practice, we can distinguish between them. Transportation basically involves the ‘movement’ of commercial goods from point A to point B. Logistics, however, refers to the process that includes the planning and implementation of several management aspects to aid the smooth movement of goods along the supply chain. Logistics helps a company handle everything from packaging, loading (or unloading), product inventory, shipment (or transportation), to warehousing (be it short-term or long-term storage). Logistics as a service has evolved to become one of the most integral sub-sectors within supply chain management. Transport, on the other hand, is crucial to logistics. Let’s picture this: you have a huge party coming up this summer, and so you approach a friend to help you organize it. I bet one of the first questions to pop up would be: “What are the logistics?” At this point, you explain all your plans and how you expect to execute each step, all the way to the last event on the big day itself. Now, if the planning involves the movement of goods and people, then one of the considerations would be how to handle the transportation part of it. In short, what would be your ‘transportation logistics’? Basically, if you work for a company in the transport and logistics industry, your education and training should have prepared you for two things, both of which are crucial to the industry. One is to have a thorough understanding of the various modes of transport, and the other is to use your skills to plan and execute detailed logistical operations. The goal is to optimize the use of resources within supply chain management.
Koszalin, German Köslin, city, Zachodniopomorskie województwo (province), northwestern Poland, on the Dzierżęcinka River. Koszalin is a resort and manufacturing city; local industry includes timber milling and woodworking, food processing, and machine works. First chronicled in 1214, Koszalin received municipal rights in 1266. It became an important trade centre through its position on the Gdańsk-Szczecin trade route and built its own merchant fleet. The bishops of Kamień and the Pomeranian dukes made it their headquarters during the 16th and 17th centuries. Koszalin was acquired by the electorate of Brandenburg in 1648 and was not returned to Poland until 1945. Pop. (2002) 108,709.
Tilt-up construction traces its roots back as far as the early 20th century, when Thomas Edison built tilt-up residences for his lab technicians in Menlo Park, New Jersey. During the 1920s, architect and contractor Robert Aiken constructed platforms to cast panels, embedded pre-cast architectural elements in the fresh concrete, raised the panels with screw jacks, and supported them with a series of wooden braces. Then the Great Depression struck and hurt the tilt-up industry: very little tilt-up construction was performed because no one was interested in its labor-saving benefits. A dramatic commercial advancement came after World War II. Contractors in southern California began using plywood as both a form and a bond-breaker between the floor slab and the panel surface. They then simply attached a jib and winch to tilt the panel to its vertical position. The tilt-up method spread quickly up the West Coast and into Canada from 1955 to 1970. Concrete manufacturers developed products specifically for tilt-up, including lift hardware and pipe braces, while the chemical industry improved bond-breaker products. In 1970, the Portland Cement Association (PCA) developed design tables allowing panels to be load bearing. The rapid expansion of tilt-up construction led to the formation of the Tilt-Up Concrete Association (TCA) in 1986. The organization represents the industry's interests before code bodies, educates the design and construction professions, and promotes the concept to building owners.
From solar, wind and nuclear to clean coal, oil and natural gas, says John DuPont, the solutions to the world’s growing demand for energy are as numerous as they are diverse. But they all share a common requirement—a sound understanding of the principles that underlie the joining of materials. The steel towers in wind mill farms are welded, as are the pipelines carrying gas and oil. The boilers in coal-fired power plants require welding expertise, as do lithium-ion car batteries, offshore oil rigs and nuclear power plants. DuPont, professor of materials science and engineering and associate director of Lehigh’s Energy Research Center, has played a major role in the creation of a new national research center devoted to the welding and joining of materials used in the energy industry. The Center for Integrative Materials Joining Science for Energy Applications seeks to extend the service lifetime of welds in the existing energy infrastructure and to increase the efficiency of the advanced welding materials used in new infrastructure. More than $5 million in funding from NSF, NASA and industry The new center is a collaboration involving Lehigh and three other universities (Ohio State, the University of Wisconsin-Madison, and the Colorado School of Mines), three national laboratories (Los Alamos, Oak Ridge and Idaho), the National Aeronautics and Space Administration (NASA), and 16 industrial companies that make materials with energy applications. The National Science Foundation (NSF) last month approved the center as an Industry-University Cooperative Research Center (IUCRC). The center will receive more than $5 million in funding over the next five years from the national labs, the partner companies and NSF, and it will have the opportunity to reapply for funding when the five years are completed. The need for improved welding materials and technologies, says DuPont, is driven by the fact that new power plants, whether nuclear or fossil-fuel, are designed to operate at higher temperatures and under greater pressure. These two factors enable power plants to run more efficiently but they impose greater demands on welded joints. “Every time we come up with new welding materials, we have to minimize the adverse effect that joining can have on the materials’ performance. A weld is like a weak link. Its performance influences the performance of the overall material.” Ohio State is the lead university in the new research center. Lehigh and the two other schools are also research sites. The center’s members will work on different projects and share research results. Research topics will be chosen by industrial companies. DuPont has spent 20 years investigating the welding and joining of materials and is particularly interested in the welding metallurgy of the nickel-based alloys that are used in power-generation applications. He supervises five graduate students and has published more than 230 technical articles, in addition to a recently completed textbook on the subject. His previous awards include NSF’s CAREER Award and also its Presidential Award, which is the highest honor granted by the U.S. government to young scientists and engineers. Story by Kurt Pfitzer Posted on Tuesday, October 26, 2010
Develop a portfolio of measures across the campus. Research campuses consume more energy per square foot than most facilities. They also have greater opportunities to reduce energy consumption, implement renewable energy systems, reduce greenhouse gas emissions, and set an example of climate neutrality. This Web site provides research campuses a five-step process to develop and implement climate action plans. The process follows a logical hierarchy of actions to evaluate options by energy sector and set specific targets. It encompasses every energy system on campus, recognizing that campus-wide measures have greater potential for reducing carbon emissions. Use the Climate Action Planning Tool to determine which technology options will have the most impact on your campus. The National Renewable Energy Laboratory (NREL) developed this Web site with support from Labs21—a joint venture of the U.S. Department of Energy (DOE) Federal Energy Management Program (FEMP) and the U.S. Environmental Protection Agency.
As we have explored the history and the emerging role of biofuels in the transition to a sustainable economy, we have seen that, in many ways, it represents a return to the roots of American industry. The same thing can be said as we expand our scope to include other plant-based products that form the building blocks of the newly re-emerging bio-based economy. Indeed, before petroleum became widely available, driving prices down, most of our products were plant-based, derived not only from lumber and cotton, but also from corn, soy and sugar beets, which were used to make chemicals, paints, construction materials, clothing, and other household materials. Today, a staggering number of the products we consume contain some amount of fossil fuels, if not in their makeup, then in their production. I’m not just talking about fuels here, either. Many everyday items, ranging from plastics, to preservatives, to fertilizer, from nylon to polyester, are derived from oil. What if most, if not all, of that could be replaced with fuel and materials derived from biological sources? That is the premise and the promise of a bio-based economy. The idea never really went away, especially among long-range thinkers like Henry Ford, who, in 1941, showcased a super-strong plastic car, no doubt in response to the steel shortage brought about by the war. The car could also run on ethanol. Now, not only have gas prices gone back up, but we have even better reasons to avoid fossil fuels, including a sustainability movement that strives for an economy that emulates nature in its means of production. “Mother Nature looks at greenhouse gases as a feedstock. It’s what trees, plants and huge structures like coral reefs are built from.” So says a video from NatureWorks, a subsidiary of Cargill, describing the bio-based materials they produce. According to the USDA, bio-based products are “commercial or industrial products, other than food or feed, that are composed in whole, or in significant part, of biological products or renewable agricultural materials (including plant, animal, and aquatic materials), or forestry materials.” Public benefits that can be expected from such a transition range from national security to increased economic demand for farmers, industry and rural communities, as well as environmental benefits at the global, regional, and local levels. The USDA founded the Alternative Agricultural Research and Commercialization Center (AARCC) in 1992 to encourage investment in the development of “new non-food and non-feed products made from agricultural/forestry commodities.” It recognized that this was a good investment, since the markets created will ultimately reduce the need for agricultural subsidies as well as the need for imported oil. The agency created a label in 2011 with the intention “to promote the increased purchase and use of biobased products,” with the idea that doing so would:
- provide opportunities to boost domestic demand for renewable commodities
- create jobs
- create investment income
Products can qualify for the label with a minimum of 25 percent bio-based content, unless otherwise specified by the standard. In the first year since its inception, over 500 products were certified to use the label. Unfortunately, the funding for the labeling program is currently stalled, awaiting the passage of a new farm bill.
The National Research Council’s 2000 report “Biobased Industrial Products: Priorities for Research and Commercialization” projected liquid fuels growing from 1-2 percent to 50 percent by 2090, biobased organic chemicals growing from 10 to 90 percent, and biomaterials growing from 90 to 99 percent over the same timeframe. So why haven’t we heard more about this? Although there are significant activities going on here, the topic seems to be more prominently discussed in Europe. The OECD has come out in support of industrial biotechnology, focusing primarily on policy issues. It claims that, “Obtaining the full benefits of the bioeconomy will require purposive goal-oriented policy. This will require leadership, primarily by governments but also by leading firms, to establish goals for the application of biotechnology to primary production, industry and health; to put in place the structural conditions required to achieve success such as obtaining regional and international agreements; and to develop mechanisms to ensure that policy can flexibly adapt to new opportunities.” EuropaBio, the European Association for BioIndustries, maintains bio-economy.net, where it describes developments in the areas of food and feed, bio-materials, biofuels, enzymes, chemicals, and biorefinery. There are a number of reasons why the prospect of a bio-based economy is compelling. One is the sheer volume of raw material that can be made available, benefiting many local economies, since biomass, unlike fossil fuels, can be produced anywhere. By 2030, an estimated 914 million tons of biomass residue will be available from the following eight regions: China (221), U.S. (180), Brazil (177), EU (151), India (110), Argentina (39), Mexico (20) and Australia (16), most of it as a by-product of food production. This figure represents 17.5 percent of the total produced, since most of the residue (75 percent) is left on the ground to nourish the soil. Still, this is enough to provide for half of the world’s projected gasoline needs at that time. Of course, this will require modification to automobile engines, since those being produced today can only accommodate a maximum of 15-20 percent ethanol blended with gasoline. But there is no reason why cars can’t be built to run entirely on ethanol. Brazil has been building them since the 1970s. In fact, the original Ford Model T did just that. Novozymes estimates that the bio-based economy will create 8 million new jobs, consisting of 4.6 million in construction, 1.8 million in operations, 0.9 million in collection and 0.7 million in transportation. That doesn’t include additional farm jobs. Biotech solutions are a key element in many aspects of this new economy. Special blends of micro-organisms can improve yield and reduce environmental impact. Enzymes can help utilize the raw material more efficiently. They can also be added to animal feeds to make them more digestible, break down biomass into sugars for ethanol and other fuel production, and be used to produce fuel from waste. The idea is spreading. Ford has recently begun using soy-based foam in its seat cushions, which were previously petroleum-based. They are not alone. According to Bloomberg, there are more than 3,000 U.S. companies producing some 25,000 bio-based products. A recent report by the Center for Automotive Research (CAR) found that, “Bio-based materials have been tested and deployed in a number of automotive components. Flax, sisal, and hemp are used in door interiors, seatback linings, package shelves, and floor panels.
Coconut fiber and bio-based foams have been used to make seat bottoms, back cushions, and head restraints. Cotton and other natural fibers have been shown to offer superior sound proofing properties and are used in interior components. Abaca fiber has been used to make underbody panels.” The biobased economy is coming and we should be happy to see it. While it might not be as glamorous as high tech solar or cutting edge wind turbines, it has enormous potential and we should not be surprised to see it carry a large share of the transition to a more sustainable economy. Think of it as another way to put the green in green economy. RP Siegel, PE, is an inventor, consultant and author. He co-wrote the eco-thriller Vapor Trails, the first in a series covering the human side of various sustainability issues including energy, food, and water in an exciting and entertaining format. Now available on Kindle. Follow RP Siegel on Twitter.
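As a quick sanity check on the biomass figures quoted earlier in this piece: the eight regional residue estimates do sum to the stated 914 million tons, and the 17.5 percent share implies a total of roughly 5.2 billion tons of residue produced. A few lines of Python make the arithmetic explicit:

```python
# Regional biomass residue available by 2030, in million tons (from the text).
residues = {
    "China": 221, "U.S.": 180, "Brazil": 177, "EU": 151,
    "India": 110, "Argentina": 39, "Mexico": 20, "Australia": 16,
}

available = sum(residues.values())
print(available)  # 914 million tons, matching the article's total

# The text says this is 17.5% of total residue produced (most of the rest
# is left on the ground to nourish the soil).
total_produced = available / 0.175
print(f"Implied total residue produced: ~{total_produced:,.0f} million tons")
```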
Why talk about energy design? Many of today’s building projects prove that minimising running costs is a smart thing to do as early as the planning process. The energy design of a building is the basis for its expected future dependency on fuel resources and, consequently, a considerable part of its expected future running costs. When building costs are discussed, the argumentation is commonly limited to construction costs. With the increasing relevance of energy prices and more and more investors asking for certified buildings, operation costs and even costs for deconstruction become more and more relevant. Depending on the specific building parameters, such as usage type and location, future costs can make up to 80% of the total lifetime costs of a building. A smart energy design does not only consider minimised heat loss via the building envelope, so that very little energy supply is needed in operation; it also seeks to make best use of the building geometry and its thermal mass, and of how solar energy is efficiently made available to the building users.
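To see how running costs can come to dominate lifetime costs, here is a minimal lifecycle sketch. The construction cost, annual running cost and lifetime below are assumed illustrative values, not data from the text; with these inputs the running share lands at 75%, in the region of the up-to-80% figure cited above.

```python
# Illustrative lifecycle cost split for a building (all inputs assumed).
construction_cost = 2_000_000   # one-off construction cost
annual_running_cost = 120_000   # energy, operation and maintenance per year
lifetime_years = 50

running_total = annual_running_cost * lifetime_years
lifetime_total = construction_cost + running_total

share = running_total / lifetime_total
print(f"Running costs: {share:.0%} of total lifetime cost")  # 75% here
```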
Small and medium size enterprises (SMEs) in China consist of community enterprises (mainly owned by townships and villages), multiple cooperative enterprises, joint ventures, and individual and private enterprises. SMEs produce a significant share of China's GDP in a number of industrial sectors. In 1995, there were about 22 million SMEs in China employing 129 million people. SMEs in China face a number of constraints to engaging in technology transfer, such as for producing more energy-efficient products or investing in more energy-efficient processes, including:

Information. SMEs lack contact with technology manufacturers and customers, so information about technology availability and customer demands is lacking. The evolution of industrial SMEs from non-sector-specific commune-based enterprises made SMEs rely on low-grade technologies and gave them little access to formal information and training channels. SMEs learn largely by visiting and copying other firms in the same sector. This constraint on information acquisition is especially true of what might be called organisational technologies such as project analysis, financial methods, or studies of market developments and factor price forecasts. SMEs have limited interchange with government ministries that might be in a position to advise them on technology choices.

Rural customer demand. Rural customers show little appreciation for product quality (such as energy efficiency). Competition is based solely on price, and regulatory initiatives to promote product quality do not exist. In cases where some product quality standards do exist (i.e., minimum heat efficiency of bricks), they are usually not enforced. Even when customers do appreciate quality, they are often not able to pay for higher up-front capital expenditures because of severe capital constraints. And there are usually few marketing activities or product-labelling initiatives to better inform customers and encourage them to distinguish between higher quality and lower quality products.

Financing. SMEs do not possess the financial means to invest in more advanced technologies. On the other end, technology manufacturers are not in a position, and other intermediaries do not exist, to provide financial mechanisms encouraging technology supply push. Financial institutions are reluctant to lend to SMEs for such investments.

Market competition. SMEs face little competitive pressure in their rural markets. All local producers operate under the control of the SMEs, and local markets are highly segregated. SMEs are integrated into a spatial network of enterprises supplying largely to local markets rather than a product-oriented network. For this reason, inter-local distribution networks are weak or nonexistent, and opportunities to exploit economies of scale in production are limited. Product pricing is somewhat arbitrary, and an SME is not driven out of the market when its profitability is too low. So far, SMEs have no experience with market/competition-based regulation.
Gelcoat is a chemical product composed of resin (polyester, epoxy or vinyl ester) and a silica or micro-balloon filler. It is used to create an outer surface for laminates. Applying gelcoat is an important step in the fabrication of a laminated composite hull or deck: it forms the outer coating that ensures an attractive, watertight finish. It can also be used to line a ship's holds or tanks, and some versions are suitable for contact with food products. All shipyards handling composite materials use gelcoat, and most recreational sailors keep some aboard for touch-ups. Gelcoat is a mixture of resin and filler, or extender, often silica, sometimes paraffin. It creates a waterproof coating and can be impregnated with pigments. Since resins are not always compatible with one another, polyester, vinyl ester or epoxy gelcoat is chosen as a function of the laminate to be coated. The first two dry faster, but are not as waterproof as epoxy. Gelcoat comes in cans as a two-part product: the gelcoat itself and a hardener or catalyst. Spray-can versions are intended for touch-ups. How to pick your gelcoat? The choice will depend on the nature of the laminated material to be coated. Epoxy gelcoat is the most expensive, but also the most waterproof. It is also harder to sand than polyester or vinyl ester.
What is Consensus Decision Making? Consensus is a decision-making approach that seeks to secure the support of the whole group for the decision at hand. Many people believe that consensus is the same thing as unanimous agreement, but this is not necessarily the case. Unanimity is when everyone agrees. Consensus is when no one disagrees. A specific definition of consensus may be spelled out in a team's ground rules or operating agreements. When the definition isn't clear in advance, facilitators recommend clarifying what consensus means during the meeting. The recommended definition is "Everyone can live with and will support this decision." This allows everyone to acknowledge that while the decision they're making may not be perfect, it is acceptable and the team can move on.
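The distinction between unanimity and the "no one disagrees" definition of consensus is easy to make concrete. Below is a tiny Python sketch (the vote labels and function names are invented for illustration) showing a group that reaches consensus without being unanimous:

```python
# Each participant answers "agree", "live with it", or "disagree".
def unanimous(votes):
    # Unanimity: everyone actively agrees.
    return all(v == "agree" for v in votes)

def consensus(votes):
    # Consensus as defined above: no one disagrees,
    # even if some members merely "live with" the decision.
    return all(v != "disagree" for v in votes)

votes = ["agree", "live with it", "agree", "live with it"]
print(unanimous(votes))  # False - not everyone actively agrees
print(consensus(votes))  # True  - no one disagrees, so the team can move on
```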
With more than 90% of its electricity generated from renewable energy sources and a goal of reaching 95% by 2014, Costa Rica is certainly one of the greenest countries on the planet. It is also on track to become the world's first carbon-free economy. I recently returned from a 12-day tour, sponsored by the Global Renewable Energy Education Network (GREEN), showcasing renewable and sustainable energy in Costa Rica. With this experience fresh in my mind, I thought I'd take this opportunity to share some of the educational highlights with Energy Currents readers. Costa Rica: A renewables paradise Mother Nature has greatly influenced Costa Rica's commitment to renewable energy. The country is blessed with copious amounts of rainfall – most of the country receives more than 100 inches of rain per year. Thus, it's no surprise that over 80% of Costa Rica's electricity is generated by hydro facilities. The country also boasts considerable geothermal power as well as growing wind, solar, and biomass facilities. ICE's role in renewables The Instituto Costarricense de Electricidad, or ICE (pronounced ee-say), is the state-owned electric monopoly that provides power to over 98% of Costa Rican homes. While many of the facilities that produce this power are ICE-owned, a small percentage is owned privately under rather non-traditional contracts. In many cases, these facilities are privately owned for a period of 15 years and are then handed over to ICE, which then owns and operates them. After a decade-long break from allowing such projects, ICE recently announced a plan to again accept bids for privately owned renewable projects (100 MW of hydro and 40 MW of wind). The plan intentionally aligns with Costa Rica's goal of becoming a carbon-free economy. ICE also implemented a net metering program in 2010 whose goals were, again, to increase renewable energy production and thus the country's energy independence. The pilot program also allows ICE to study the effects of distributed generation on its grid as well as to promote new renewable technologies. A countrywide commitment Costa Ricans are very proud of their renewable and sustainable efforts, which come at a premium price. Average residential rates are over 30 cents per kWh, and this may soon increase. Yet oddly, citizens are not likely to complain. The dedication to a renewable/sustainable society seems to be a shared goal, and the monetary cost of this commitment is widely accepted, as are the variables that can affect it. For example, with such a large portion of electricity needs met by hydroelectric power, the country is hugely dependent upon rain. And in dryer years, as 2012 has so far been, ICE is concerned that it cannot generate enough supply to match demand. Less water means less hydro power is available. This means costs increase (since power must be purchased from other sources) and so does the amount of power generated from fossil fuels. The GREEN tour afforded unprecedented access to renewable facilities in Costa Rica. My group and I enjoyed guided tours of hydro facilities, a biomass plant/sugar cane refinery, a geothermal plant, and a wind farm. Not only were we inches from the equipment housed in these facilities (imagine access like this in the U.S.!), but we also heard first-hand accounts of how such equipment is run and ICE's unique perspectives on electricity production.
While GREEN is currently focused on providing this experience to college-level audiences, Enerdynamics and GREEN are discussing a partnership where this unique opportunity could be available to business professionals. For more information, please contact me at [email protected].
Among conventional incineration systems, the fluidized bed combustor (FBC) has been described as one of the most advantageous: it offers simple operation, the ability to accommodate low-quality fuels such as biomass, sludge and MSW with high moisture content, reduced auxiliary fuel use, and reduced operating and maintenance costs. These benefits can only be achieved if optimal operating parameters are determined. This paper presents the methods and part of the findings of on-going research aimed at optimizing the operating parameters that give the lowest emissions in the combustion of a fluff refuse-derived fuel (f-RDF) in a pilot-scale fluidized bed combustor. The methods adopted include cold fluidization studies in a rectangular model column to determine the fluidizing velocity of the inert bed material (silica sand) and the effects of increasing fluidizing numbers on the mixing behavior of bed and fuel, followed by combustion studies in the pilot-scale FBC.
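For readers who want a feel for the cold-fluidization step, the minimum fluidization velocity of a sand bed is commonly estimated with the Wen & Yu (1966) correlation before being refined experimentally. The sketch below is not the authors' procedure; the particle and gas properties are assumed for illustration:

```python
import math

# Estimate minimum fluidization velocity U_mf for silica sand in air
# using the Wen & Yu correlation: Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7.
d_p = 0.5e-3    # particle diameter, m (assumed)
rho_p = 2650.0  # silica sand density, kg/m^3 (assumed)
rho_g = 1.2     # air density, kg/m^3
mu = 1.8e-5     # air dynamic viscosity, Pa*s
g = 9.81        # gravitational acceleration, m/s^2

# Archimedes number
Ar = d_p**3 * rho_g * (rho_p - rho_g) * g / mu**2
Re_mf = math.sqrt(33.7**2 + 0.0408 * Ar) - 33.7
U_mf = Re_mf * mu / (rho_g * d_p)

print(f"Ar = {Ar:.0f}, Re_mf = {Re_mf:.2f}, U_mf = {U_mf:.3f} m/s")  # ~0.2 m/s
```

Operating velocities are then expressed as multiples of U_mf, which is what the "fluidizing numbers" mentioned above refer to.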
Study: the Energiewende does not need to wait for storage The development of wind and solar systems in Germany during the next 20 years does not require new power storage. The flexibility needed to compensate for weather-dependent power generation can be provided much more cost-effectively. This can be achieved, for example, by the flexible operation of fossil power plants, demand-side management in industry, and power trading with neighbouring countries. Markets for storage technologies, however, will develop strongly in other sectors in the coming years - especially in transport and the chemical industry. The power system will be able to benefit from this. For example, as an additional benefit, batteries for electric cars can provide the electricity sector with added flexibility. These are the main results of the study "Electricity Storage in the German Energy Transition", conducted by four renowned research institutes on behalf of Agora Energiewende. "The energy transition must not wait for storage. For the next 15 to 20 years - that is, up to 60 percent renewable energies - we will have plenty of other, cheaper flexibility options available", says Patrick Graichen, director of the think tank supported by the Mercator Foundation and the European Climate Foundation. "The markets for new storage technologies such as batteries, power-to-heat or power-to-gas are nevertheless likely to grow dynamically due to increasing demand in transportation, heating and chemistry". The study distinguishes between long-term and short-term storage technologies and uses three scenarios to examine different types of storage expansion. These scenarios reflect the foreseeable power system of Germany in 2023 and 2033, as well as a power system with a 90 percent share of renewable energies. In addition to using storage systems to compensate for variable power generation and demand, the study also considered their use for ancillary services. Moreover, the use of storage systems to defer grid expansion at distribution grid level was also examined closely. It was found that battery storage can already be used cost-effectively in some applications today. These niche applications will, however, only reach a limited market volume in the long term. "New power storage is currently still expensive. However, this can also change quickly. Storage must now already receive equal access to the markets. This applies to markets for flexibility, such as the current ancillary services market or a future capacity market. This also applies to the distribution network, where storage systems can be a tool in the toolbox of distribution grid operators", says Graichen. The study was conducted by a consortium consisting of Fenes (OTH Regensburg), IAEW (RWTH Aachen), ef.Ruhr (TU Dortmund) and ISEA (RWTH Aachen), commissioned by Agora Energiewende.
Honduras Energy Sources Sources: The Library of Congress Country Studies; CIA World Factbook Honduras has for many years relied on fuelwood and biomass (mostly waste products from agricultural production) to supply its energy needs. The country has never been a producer of petroleum and depends on imported oil to fill much of its energy needs. In 1991 Honduras consumed about 16,000 barrels of oil daily. Honduras spent approximately US$143 million, or 13 percent of its total export earnings, to purchase oil in 1991. The country's one small refinery at Puerto Cortés closed in 1993. Various Honduran governments have done little to encourage oil exploration, although substantial oil deposits have long been suspected in the Río Sula valley and offshore along the Caribbean coast. An oil exploration consortium consisting of the Venezuelan state oil company, Venezuelan Petroleum, Inc. (Petróleos de Venezuela, S.A.--PDVSA), Cambria Oil, and Texaco expressed interest in the construction of a refinery at Puerto Castilla in 1993, with production aimed at the local market. Fuelwood and biomass have traditionally met about 67 percent of the country's total energy demand; petroleum, 29 percent; and electricity, 4 percent. In 1987 Honduran households consumed approximately 60 percent of total energy used, transportation and agriculture used about 26 percent, and industry used about 14 percent. Food processing consumed about 50 percent of industrial sector energy, followed by petroleum and chemical manufacturing. Data as of December 1993 NOTE: The information regarding Honduras on this page is re-published from The Library of Congress Country Studies and the CIA World Factbook. No claims are made regarding the accuracy of Honduras Energy Sources information contained here. All suggestions for corrections of any errors about Honduras Energy Sources should be addressed to the Library of Congress and the CIA.
Operating since 1984, we have designed, equipped or installed hundreds of different power systems in Canada and internationally. In 2017, the Oregon Legislature directed ODOE to conduct an inventory of potential biogas and renewable natural gas production, feedstock or resource quantities and locations, and supply chain infrastructure. Meanwhile, countries like Denmark, Portugal, and Spain lead the world in wind power, with Denmark receiving almost a quarter of its power from wind turbines. Since the early twentieth century, the term hydropower has been used almost exclusively in connection with the modern development of hydroelectric energy, which allowed the use of distant energy sources. Alternative energy sources can be implemented for houses, for cars, for factories and for any other facility you can imagine. Research published this month in the US-based journal Renewable Energy suggests that wave power technologies could indeed prove to be a cost-viable alternative to fossil fuels in the future. The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. It is clear that alternative energy sources that are renewable and sustainable are needed to fill the world's energy needs. Biomass briquettes are being developed in the developing world as an alternative to charcoal. Biomass energy generation typically refers to the combustion of plant materials to power turbines which, in turn, generate electricity. As with the burning of fossil fuels, burning biomass releases carbon dioxide and other pollutants. Local governments can dramatically reduce their carbon footprint by purchasing or directly producing electricity from clean, renewable sources. Wind energy may have faded as an energy alternative, but a Google-backed startup is poised to revive the industry.
What Can You Save Today? EPA is challenging all citizens to conserve our natural resources and save energy by committing ourselves to: - Reduce more waste; - Reuse and recycle more products; and - Buy more recycled and recyclable products. To Help You Get Started - Reduce Your Packaging: Buy bulk or concentrated products when you can. - Reduce Toxicity or Learn How: Recycle your batteries and use batteries with reduced mercury. - Select Reusable Products: Sturdy, washable utensils, tableware, cloth napkins, and dishcloths can be used many times. - Use Durable Products: Choose furniture, sports equipment, toys, and tools that will stand the test of time. - Reuse Products: Reuse newspaper, boxes, shipping "peanuts," and "bubble wrap" to ship packages. - Recycle Automotive Products: Take car batteries, antifreeze, and motor oil to participating recycling centers. - Buy Products Made From Recycled Material: Many bottles, cans, cereal boxes, containers, and cartons are made from recycled material. - Compost or Learn How: Food scraps and yard waste can become natural soil conditioners.
Coordinating conjunctions simply link ideas. Subordinating conjunctions, on the other hand, also establish a more complex relationship between the clauses. They suggest that one idea depends on another in some way. Maybe there is a cause-and-effect relationship between the two. Or maybe the two clauses simply show a chronological development of ideas. Remember that in most cases the same clauses that are connected by a coordinating conjunction can also be connected by a subordinating conjunction. There is really no difference in meaning; however, the grammar is a bit different. Study the examples given below. - He had not received any formal training in engineering. He was a brilliant mechanic. These two clauses can be combined using the coordinating conjunction but. - He had not received any formal training in engineering, but he was a brilliant mechanic. We can also express the same idea using the subordinating conjunction though / although. - Although he had not received any formal training in engineering, he was a brilliant mechanic. The rules of punctuation are very important when we use subordinating conjunctions to join clauses. As a general rule, a subordinate clause that comes at the beginning of a sentence should be separated from the other clause with a comma. You can omit the comma when the subordinate clause goes after the main clause. - Since he had not applied in time, he didn't get the job. (Here we use a comma to separate the subordinate clause from the main clause.) - He didn't get the job because he hadn't applied in time. (Here we do not use a comma because the subordinate clause goes after the main clause.)
Oil theft dates back to the 1970s, when Nigeria had its first oil boom. With the return to democratic governance in 1999, many big-time oil thieves became political leaders and many political leaders became oil thieves. They formed cult groups in the oil-producing areas, from which they linked to their contacts in government and the security agencies. The younger ones became political thugs who, after elections, turned to oil thievery to arm themselves for the next election. Many of these armed youths embraced the militancy that shook the foundations of the nation's economy. The armed confrontations between the security forces and the militants ended on 4 October 2009, when the regime of the late Umaru Yar'Adua offered the militants, estimated at 26,000, amnesty in exchange for the unconditional surrender of their arms and a return to the confines of the law. Since that deal went through, the shooting in the Niger Delta has quietened down. But the upsurge in oil thievery and the proliferation of illegal refineries in the creeks of the Niger Delta have been dramatic. Shell estimates that over 150,000 barrels of crude oil are lost to oil thieves daily. The Minister of Finance, Dr. Ngozi Okonjo-Iweala, has a higher estimate of about one-fifth of the nation's daily oil revenue being lost to oil thieves. That would translate to about 500,000 barrels, or $50 million (N8 billion) daily and N2.92 trillion annually, more than half of the 2012 budget. The post-amnesty deal did not give adequate attention to crucial aspects of "cleaning up after the party". What other agreements did Nigeria extract from the militants, beyond the cessation of hostilities? Since they depended on illegal bunkering to procure arms and logistics while they fought the state, what steps were taken to remove them from this lucrative activity? Oil thieves and their illegal refineries are partly responsible for the rampant oil spills in the creeks, which worsen the environmental challenges that many of the militants listed as one of their major reasons for embracing the armed struggle. Where is the crackdown on these economic saboteurs the President promised? What steps have been taken to arrest the problem posed by corrupt security officials who see their posting to the region as an opportunity to strike it rich? The oil companies are also accused of involvement in the stealing, which is a highly technical operation, executed with technologies that are not readily available to everyone. Beyond economic sabotage, Nigeria's territorial integrity stands the risk of being unsettled by the money and arms these thieves are accumulating. If oil theft is unchecked, Nigeria is setting itself up for more trouble in the Niger Delta!
Basic Ore Processing Haul trucks transport the ore from open pits or underground operations to processing operations. Some ores may be stockpiled for later processing. Rock that is not economical to mine is stored in waste rock storage areas. The grade and type of ore determine the processing method used. Additionally, the geochemical makeup of the ore, including its hardness, sulfur content, carbon content and other minerals found within, impact the cost and methods used to extract gold. Depending on the ore, we process it using the following methods: - We feed ore into a series of crushers and grinding mills to reduce the size of the ore particles and expose the mineral. Water is also added, which turns the ore into a slurry. - We send this slurry to leaching tanks, where we add a weak cyanide solution to the slurry, which leaches gold and silver into the solution. This process removes up to 93 percent of the gold and 70 percent of the silver from the ore. Carbon granules are then added to the solution. The gold attaches to the carbon and is pulled from the solution. - We then “strip” the gold from the carbon by washing it with a caustic cyanide solution. The carbon is later recycled. - Next, we pump the gold-bearing solution through electro-winning cells, which extract metals from the solution using an electrical current. - After gold has been processed, the leftover waste material is called tailings. Tailings contain small amounts of cyanide and other hazardous chemicals, so they must be disposed of in an environmentally safe way. The tailings are stored in tailings dams, which are lined with impermeable layers. While the cyanide levels in the dam are safe, steps are taken to keep wildlife away from the dams. Over time, the chemicals break down and the solids settle to the bottom so that the water can be returned to the plant to be used in processing. - We then smelt the gold, which melts it in a furnace at about 1,202 degrees F. - From there, the liquid gold is poured into molds, creating doré bars. Doré bars are unrefined gold bullion bars containing anywhere from 60 to 95 percent gold. - We finally send the bars to a refinery for further processing into pure gold. Alternative Ore Processing We use alternative gold recovery methods in some processing plants to accommodate different ore characteristics or other requirements. For example, ore that has a high level of sulfide minerals or carbon (or both) is called refractory ore. Refractory ore resists normal processing methods as the high sulfide minerals trap gold particles, making it difficult for the cyanide to reach the gold and leach it. To leach gold from refractory ore, it must be subjected to high temperature, high pressure and/or oxygen. Newmont treats refractory ore in two ways: by using an autoclave or a roaster. An autoclave is used before leaching occurs. First, the slurry is heated and fed into an autoclave, where high-pressure steam, water and oxygen are applied to oxidize the sulfide material by a chemical reaction. The slurry is then cooled and sent back into the process to be leached. An alternative to an autoclave is a roaster, a very high temperature oven that is often used instead of an autoclave if the ore to process contains a large amount of organic carbon. Roasting uses heat and air to burn the organic carbon into fuel and to burn the sulfur off ore, which we heat to 932 to 1,202 degrees F. 
In heap leaching, we dump crushed ore into piles called heaps, to which we apply a weak cyanide solution, using drip feeders. The gold dissolves into the cyanide solution. The entire heap leach area is lined with heavy duty liners to ensure no solution leaks into the environment. Next, we collect the gold-cyanide solution in ditches and ponds, and then transport it to a recovery plant. Flotation is a method of separating minerals depending on their ability to attach to air bubbles. Flotation can be used for a number of materials by adjusting the chemicals. At Newmont, it is used for copper recovery and, in a very limited number of cases, for gold processing. We introduce air bubbles to the slurry while it is in small tanks, called flotation cells. We add some chemicals to the slurry to assist the process. The desired minerals stick to the bubbles and rise to the top, resulting in froth. The froth overflows from the tank, and is removed and sent to the next step in processing. A gravity circuit recovers coarse gold before it is leached. Gravity circuits use the same principles as gold panning: coarse gold is heavier than other material and will settle to the bottom so that it can be removed (gold is 19.3 times heavier than an equal volume of water).
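The recovery percentages quoted in the leaching description translate directly into a simple mass balance. In the sketch below, only the 93% gold and 70% silver recovery factors come from the text; the ore grade and tonnage are hypothetical:

```python
# Back-of-envelope mass balance for cyanide leaching.
tonnes_ore = 10_000        # ore processed, tonnes (hypothetical)
gold_grade_gpt = 2.0       # gold grade, grams per tonne (hypothetical)
silver_grade_gpt = 15.0    # silver grade, grams per tonne (hypothetical)
gold_recovery = 0.93       # "up to 93 percent of the gold" (from the text)
silver_recovery = 0.70     # "70 percent of the silver" (from the text)

gold_kg = tonnes_ore * gold_grade_gpt * gold_recovery / 1000
silver_kg = tonnes_ore * silver_grade_gpt * silver_recovery / 1000
print(f"Gold recovered:   {gold_kg:.1f} kg")   # 18.6 kg
print(f"Silver recovered: {silver_kg:.1f} kg") # 105.0 kg

# The smelting temperature quoted above, about 1,202 F, converts to:
print((1202 - 32) * 5 / 9, "C")  # 650.0 C
```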
REDUCING FOOD WASTE Reducing waste by half would prevent close to 50 million tons of waste from entering the waste stream. Composting is the least expensive alternate disposal method, but packaging is an issue for waste recycling. Because food recycling centers cannot accept plastics, waste generators must set up recycling programs and educate staff about recycling techniques in an industry with little infrastructure and few regulations. Anaerobic digestion, another method of managing waste, creates heat, steam, electricity or gas as a by-product. While anaerobic digestion systems require a larger initial investment, costs are decreasing as technology improves. “Unfortunately, only 2-3 percent of the food waste being generated actually makes it to an alternate recycling or alternate disposal method at this point.” The current waste management system perpetuates the need for larger landfills and disposal sites for the 350 million tons of waste generated in the United States annually. Composting sites are monitored at the state level by an environmental protection department. Waste recycling restrictions are similar from state to state because most states adopt each other’s regulations. “It’s something that comes along with the territory,” Mr. Manna added. DIVING INTO THE WASTE STREAM With help, some organizations, such as hospitals, restaurants and supermarkets, are beginning to see composting as part of the solution. In the ’90s, Mr. Manna realized that New Jersey’s recycling rates had hit a plateau. As more fast food chains opened, food packaging increased and pre-made, on-the-go foods gained popularity. He realized that food waste would be the next frontier in recycling. When he first began speaking about recycling, he faced small, under-educated audiences who did not see food waste as an issue for their times. Since then, businesses and their customers began to see themselves as part of the solution, and attention to recycling increased. “Over the last 10 years or so, it has really started to pick up steam. More people are seeing the value in recycling food waste.” He now develops food waste recycling programs tailored to his clients, the corporate cafés, supermarkets and casinos generating waste. He starts by analyzing waste. With 20 years of experience, he is not afraid to dumpster dive. Mr. Manna visits his customers’ facilities and separates food from trash in order to measure how much waste the company generates. “If you know what’s in the waste stream, you can find ways to divert it or avoid it totally,” Mr. Manna said. “I think that’s a key to any business operation.” The food waste recycling industry is growing. Beyond its more obvious environmental impact, the industry affects the economy, creating jobs and savings that help waste generators as well as local and state economies. Mr. Manna, who sits on the U.S. Composting Council’s board of directors, has watched the industry develop over the decades. Thousands of people attend the council’s annual conference and the attendees are diverse. Over the years more company representatives began making appearances, which Mr. Manna sees as evidence of the industry’s foothold in the larger economy. “These are green jobs that we’re creating. This is a growing industry, and it can help our economy.” CASE STUDY: WEIS MARKETS Weis Markets, Inc. is determined to make inroads in the disposal of their waste.
The company is a member of the Food Marketing Institute, a national association that helps stores fight food spoilage and improve distribution methods. The institute asks members of trade organizations to find ways to reduce waste other than traditional waste management. They find that location makes the difference. Each Weis store approaches its waste management differently because of the unique circumstances each branch faces. Factors such as whether a store uses compactors or dumpsters, as well as the store’s size and the frequency of trash pickups, affect whether a store will be considered for the recycling program. Weis’s goal to reduce its carbon footprint includes managing food waste through its food donation policy and composting practices. “There are higher uses [for food] than just having it go to a landfill,” said Patti Olenick, sustainability officer for Weis Markets. The initial waste recycling pilot program in 2009 was not the success they had hoped for. The pilot focused on nine stores around the chain’s Harrisburg headquarters, a region with low waste management costs. However, the pilot faced complications from the beginning. Weis hoped to reduce the cost of waste disposal by cutting its trash volume by half. But the additional cost of composting still had to be factored in, which meant that pilot stores with already low waste disposal costs saw overall costs rise. One of the reasons was that Weis had to pay an independent hauler to transport out-of-date food to the composting farm. There, the company was charged an additional tipping fee (cost for disposal per ton) to unload the cargo. As a result, the first pilot failed from a business perspective because the costs involved in transporting and recycling food proved higher than the cost of sending the waste to a landfill. Based on data analysis from the first pilot, some stores will never geographically make the business case for food recycling. Ms. Olenick came on board shortly after the failed pilot program and committed herself to reevaluating the possibilities. After analyzing every store, she came up with 50 locations to begin recycling food waste. “We now go to areas where it’s making economic sense,” Ms. Olenick added.
In 2011, a Democratic Congressional Investigating Committee determined that 14 leading oil companies used 2500 different toxic fracking liquids. The liquids contained 750 different chemicals, including 270 undisclosed ingredients protected by industry trade secret laws which the oil and gas service companies themselves were unable to identify. Prior to the onset of fracking, and in total disregard of a vast literature of previous work on water ingress, retention and release in deep geological formations, oil and gas industry scientists and environmental officials argued that deep layers of rock beneath the earth would safely entomb the liquid waste for millennia. They were dead wrong and should have known it. For more than 50 years, scientists hired to advise the NRC, the DOE and the EPA on the safe disposal of chemical and nuclear wastes argued that deep underground disposal of toxic liquids is indefensible and dangerous. Only the EPA ignored the advice. The results of environmental impact studies demonstrating overwhelming uncertainties in the travel and retention of liquids in a variety of deep underground geologies led to the promulgation of federal rules for the disposal of high-level and low-level toxic wastes that required not only solidification and/or vitrification but also further encapsulation in an impermeable "Waste Package" or container. This was required for medical and chemical wastes as well as radioactive wastes. In contrast to federal disposal sites, in the last 60 years US industries have injected more than 30 trillion gallons of toxic waste into deep underground wells in more than 30 states, under the almost non-existent regulation of Congress, the states and the EPA. In 1980, California Rep. Henry Waxman sponsored a measure that allowed the EPA to delegate authority to oversee toxic waste injection to state oil and gas regulators, even if the rules they applied varied from the Safe Drinking Water Act and federal guidelines. A few years later, Dick Stamets, New Mexico's chief oil and gas regulator at the time, told a crowd of state regulators and industry representatives that the Waxman amendment was a biblical deliverance from oppressive federal oversight for the drilling industry. "The Pharaoh EPA did propose regulations and there was chaos upon the earth," Stamets said. "The people groaned and labored, and great was their suffering until Moses Section 1425 (the Waxman amendment) did lead them to the Promised Land." The Promised Land Was Even Better Than Expected The Waxman amendment and the EPA's abrogation of its own regulatory authority were a magnanimous gift to the oil and gas industry. They allowed the development of the concept of a "Class 2" well, an exclusive toilet for the oil and gas industry. Toxic wastes from factories or refineries were prohibited, but the same wastes from the energy industry were permitted with little if any oversight, as long as they resulted from drilling. There are now more than 150,000 Class 2 wells in 33 states, into which oil and gas drillers have injected at least 10 trillion gallons of fluid of unknown composition under conditions of no regulation and no analyses. Neither Waxman nor Congress, nor the states, nor the EPA was prepared for the simultaneous discoveries of monumental deposits of shale containing natural gas and new technologically advanced fracking techniques, which produced an enormous surge of new liquid toxic fracking waste that is being injected, with essentially no regulation, into new and old wells.
In a complete turnabout, on April 16, 2011, a Democratic Congressional Investigating Committee headed by Waxman determined that between 2005 and 2009, 14 leading oil companies used 2500 different toxic fracking liquids. The liquids contained 750 different chemicals, including 270 undisclosed ingredients protected by industry trade secret laws which the oil and gas service companies themselves were unable to identify. Diana DeGette, Ranking Member of the Oversight and Investigations Subcommittee, wrote, "It is deeply disturbing to discover the content and quantity of toxic chemicals… being injected into the ground without the knowledge of the communities whose health could be affected. Of particular concern to me is that we learned that over the four-year period studied, over one and a half million gallons of carcinogens were injected into the ground in Colorado. Many companies were also unable to even identify some of the chemicals they were using in their own activities." Regulatory Capture and EPA Apologists The dangers of injection have been known for half a century. In accidents dating back to the 1960s, toxic materials have bubbled up to the surface or escaped, contaminating aquifers that store supplies of drinking water. There are more than 680,000 underground waste and injection wells nationwide, many of which are releasing toxins into the environment. Of more than 220,000 federal and state well inspections from late 2007 to late 2010, one violation was issued for every six deep injection wells examined. Penalties for injection well violations are rare and always trivial. Prosecution and punishment have never occurred. The US Environmental Protection Agency, which has primary regulatory authority over the nation's injection wells, would not discuss specific well failures identified by ProPublica or make staffers available for interviews. The agency also declined to answer many questions in writing, although it sent responses to several. Its director for the Drinking Water Protection Division, Ann Codrington, sent a statement to ProPublica defending the injection program's effectiveness. "Underground injection has been and continues to be a viable technique for subsurface storage and disposal of fluids when properly done," the statement said. "EPA recognizes that more can be done to enhance drinking water safeguards and, along with states and tribes, will work to improve the efficiency of the underground injection control program." The "Final Solution" – EPA's Toxic Antidote to Toxic Waste The energy industry won a monumental change in the federal government's legal definition of waste when, in 1988, the EPA ruled that all material resulting from the oil and gas drilling process is considered non-hazardous, regardless of its content or toxicity. "It took a lot of talking to sell the EPA on that and there are still a lot of people that don't like it," Bill Bryson, a geologist and former head of the Kansas Corporation Commission's Conservation Division who lobbied for and helped draft the federal rules, was reported by ProPublica to have said.
"But it seemed the best way to protect the environment and to stop everybody from just having to test everything all the time."(author's emphasis) [Use of fracturing fluids in hydraulic fracturing operations was explicitly excluded from regulation under the American Clean Water Act in 2005, except for diesel-based additive fracturing fluids, which have a higher proportion of volatile organic compounds and carcinogenic BTEX than other fracking fluids.] Neither Congress nor the EPA has raised the 2 obvious questions that would concern chemists with responsibility for accident analyses: -Since the oil and gas industry do not know what chemicals are in their fracking mixtures, why do they need 2500 different ones? -Are the unaccountably enormous number of ill-defined mixtures and chemicals being used to cover-up widespread disposal of illegal toxins? Congress and the EPA have allowed the oil and gas industry to dump any and all their toxic materials in "Class 2" wells under the guise "oil field drillings" without supervision or analyses. The EPA has ignored its regulatory responsibilities and relies "heavily on an honor system in which companies are supposed to report what they are pumping into the earth, whether their wells are structurally sound, and whether they have violated any rules." In other work, I have noted that more than a dozen government agencies, including the EPA, are under investigation for ethics violations. Unfortunately, experience has shown that government investigations of its own failings are like treadmills - no matter how long it takes and no matter how much you huff and puff, you end up in the same place you started. For regulatory agencies and the industries they monitor through "revolving doors," "ethics and honor" are in the archaic portion of the vocabulary. In the Orwellian puddle of American acronyms, EPA has been transformed into the Environmental Pollution Agency. R.I.P. EPA
Demonstration activity on a small island of the Canary Islands, on the integration of renewable energy and energy storage. As a result, a company was created to install and operate the system, and in the process jobs have been created. The story shows how energy policy for the promotion of renewable energies can have other positive externalities besides the most obvious ones of reducing energy dependence on imported polluting fossil fuels and reducing greenhouse gas emissions. Projects on the promotion of renewable energy in European islands have added social benefits from the creation of local quality jobs, especially for the young. These jobs are created not only during the installation phase of the systems, but afterwards in the operation and maintenance of the renewable energy infrastructures. The wind-pumped-hydro power station of El Hierro began as a project proposal submitted by ITC, the Island Authority of El Hierro and ENDESA (the local utility) to a call of the 5th Framework Programme of the EC. The project was approved and granted EC financing for a period that ran from 2002 to 2007. The main deliverables of the project were the engineering document of the system and the creation of the company GORONA DEL VIENTO, which installed and currently operates the system that provides clean renewable electricity covering 60% of the island's yearly electricity demand. The wind-pumped-hydro system includes a wind farm (11.5 MW), a hydroelectric plant (11.32 MW), a pumping station (6 MW) and two water reservoirs (height difference of 700 m). The wind turbines supply electricity to island loads, and surplus energy to a pumping station that raises water from the lower reservoir (150,000 m³ at sea level) to the upper reservoir (380,000 m³ at 700 m) to store energy. When the wind does not blow, the water is allowed to fall to the hydro station to generate electricity. When there is not enough wind and no stored water, the diesel thermal power plant that had previously served the island, and remains as backup, comes into operation. During the twenty years after its commissioning in 2014, the wind-pumped-hydro system is expected to avoid the consumption of 6,000 tons of diesel (40,000 barrels of oil) and the emission of 19,000 tons of CO2. Evidence of success (results achieved): - Renewable energy sources (RES) penetration of 60% in the isolated electric island system of El Hierro. This means a 60% reduction in imported fossil fuel for electricity generation, and a 60% reduction in greenhouse gas emissions in the electricity generation sector. - Creation of 18 high-quality direct permanent jobs in operation. Short summary of the practice: - Island population: 10,587 inhabitants - Wind farm: 11.5 MW (5 x 2.3 MW) - Pump station: 6 MW (2 x 1.5 MW + 6 x 0.5 MW) - Hydro station: 11.32 MW (4 x 2.83 MW) - Lower reservoir: 150,000 m³ at sea level - Upper reservoir: 380,000 m³ at a height of 700 m. The main difficulty was the need to finance the high initial investment cost of the project (83 M€). The long payback period and uncertainty about the evolution of conventional fossil fuel prices made the investment in the wind-pumped-hydro power station risky. The solution came from a capital grant given by the central Spanish government (35 M€), and a remuneration scheme that allows the cash flow and financial capacity needed to repay a bank loan. Job creation was not initially a main concern of the project, because the main objective was to maximize RES penetration in the island's electrical grid.
Nevertheless, the project managed to create local jobs during the installation phase, and now permanent jobs in the operation and maintenance of the system. The project has a high potential for replication on other islands around the world, for maximizing RES penetration and creating quality jobs that can help mitigate the problem of unemployment among the young. As a matter of fact, on the Greek island of Ikaria a similar system has been put into operation, inspired by the previous experience on El Hierro. The wind-pumped-hydro system is providing a solution for energy self-supply which is sustainable from both the environmental and economic perspectives. It is creating economic value from a local resource, the wind, and in the process is creating jobs that help mitigate local youth unemployment. Find out more on: http://www.goronadelviento.es/
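The reservoir figures given above are enough to estimate the plant's storage capacity. The sketch below computes the gross gravitational potential energy of a full upper reservoir; the round-trip efficiency is an assumption, not a figure from the source:

```python
# Gross energy stored in El Hierro's upper reservoir (figures from the text).
rho = 1000.0        # water density, kg/m^3
g = 9.81            # gravitational acceleration, m/s^2
head = 700.0        # height difference between reservoirs, m
volume = 380_000.0  # upper reservoir volume, m^3

energy_joules = rho * g * head * volume   # E = rho * g * h * V
energy_mwh = energy_joules / 3.6e9        # 1 MWh = 3.6e9 J
print(f"Gross storage capacity: {energy_mwh:.0f} MWh")  # ~725 MWh

# Assuming ~75% round-trip efficiency for the pump-turbine cycle
# (not stated in the source), the recoverable energy is roughly:
print(f"Recoverable (assumed):  {energy_mwh * 0.75:.0f} MWh")
```

For a population of about 10,000, this is plausibly on the order of a few days of electricity demand, though the source does not give demand figures.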
Federal Wage System: Overview & Facts Facts About the Federal Wage System The Federal Wage System (FWS) was developed to make the pay of Federal blue-collar workers comparable to prevailing private sector rates in each local wage area. Before the FWS, there was no central authority to establish wage equity for Federal trade, craft, and laboring employees. In 1965, President Johnson ordered the former Civil Service Commission to work with Federal agencies and labor organizations to study the different agency systems and combine them into a single wage system that would be sensible and just. - The Federal Wage System: Introduction - Chapter 2: The Federal Wage System: Overview and Facts - Chapter 3: What Determines Which Grade and/or Step You Fall Under? - Chapter 4: General Schedule Pay Scale Ranges - Chapter 5: General Schedule Pay Scale Detailed Chart - Chapter 6: Frequently Asked Questions About the GS Pay Scale - Chapter 7: Federal Salaries: How to Talk Your Way Up the Scale - Chapter 8: Related Articles and Links The President called for common job-grading standards and wage policies and practices that would ensure inter-agency equity in wage rates. He established two basic principles for these policies and practices: ● Wages will be set according to local prevailing rates. ● There will be equal pay for equal work and pay distinctions in keeping with work distinctions. Congress established the FWS by law in 1972. It created a joint labor-management Federal Prevailing Rate Advisory Committee (FPRAC) with an independent Chairman. Agencies and labor unions are members of the Committee. FPRAC studies all matters pertaining to prevailing rate determinations and advises the Director of the Office of Personnel Management (OPM) on appropriate pay policies for FWS employees. The goal of the system is to pay employees according to local prevailing rates. The regular pay plan covers most trade, craft, and laboring employees in the executive branch. The FWS does not cover Postal Service employees, legislative branch employees, or employees of private sector contracting firms. Special pay plans cover certain employees in special circumstances. OPM authorizes special pay plans when unusual labor market conditions seriously handicap agencies in recruiting and retaining qualified employees. The FWS is a partnership worked out between OPM, other Federal agencies, and labor organizations. OPM prescribes basic policies and procedures to ensure uniform pay-setting. OPM specifies procedures for agencies to design and conduct wage surveys, to construct wage schedules, to grade levels of work, and to administer basic and premium pay for employees. To issue common job-grading standards for major occupations, OPM occupational specialists follow specific steps to develop new standards and to update existing standards. They make full occupational studies, which include onsite visits to interview employees, supervisors, and union representatives. Specialists write standards and ask agencies and unions for comments that are carefully considered and, where appropriate, incorporated into final job-grading standards. Federal agencies are required to apply these standards. OPM defines the geographic boundaries of individual local wage areas and reviews survey job descriptions to ensure that they are accurate and current. In addition, OPM works with agencies and unions to schedule annual local wage surveys in each wage area. Wage adjustments become effective in accordance with what is commonly referred to as the 45-day law.
This law states that the Government has 45 working days to put FWS pay adjustments into effect after each wage survey starts. Wage schedules are effective with the first pay period after the 45-day period expires. The Department of Defense (DOD) is the lead agency responsible for issuing FWS wage schedules. You can find wage schedules at www.cpms.osd.mil/wage or by calling DOD directly at (703) 696-1746. Setting the Wages in Your Area For each wage area, OPM identifies a "lead" agency. The "lead" agency is responsible for conducting wage surveys, analyzing data, and issuing wage schedules under the policies and procedures prescribed by OPM. All agencies in a wage area pay their hourly wage employees according to the wage schedules developed by the lead agency. OPM has identified DOD as the lead agency for each local wage area. OPM does not conduct local wage surveys. Labor organizations play an important role in the wage determination process by providing representatives at all levels of the wage determination process. The employee unions having the greatest number of wage employees under exclusive recognition designate two of the five members of a lead agency's national level wage committee. Locally, the union with the most employees under exclusive recognition in a wage area designates one of the three members of each Local Wage Survey Committee. In addition, labor organizations nominate half of the Federal employees who collect wage data from private enterprise employers. A partnership team of one labor data collector and one management data collector visits each surveyed employer. Comparability: Wage System and General Schedule Under the FWS, your employer bases your pay on what private industry is paying for comparable levels of work in your local wage area. Employees are paid the full prevailing rate at step 2 of each grade level. Step 5, the highest step in the FWS, is 12 percent above the prevailing rate of pay. The General Schedule (GS) is a separate pay system covering most white-collar civilian Federal employees. Surveys of non-Federal employers (including State and local governments) determine the pay for GS employees. There are a number of other differences between the GS and FWS in terms of occupational coverage, geographic coverage, pay ranges, and pay adjustment cycles. Pay Caps and Delays The preceding sections describe the basic structure of the FWS and its method of setting wages based on local prevailing rates. However, specific legislation may limit or delay annual wage adjustments for some FWS employees. If you have any questions about the system and how it affects you, please contact your local personnel office.
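The step structure described above can be made concrete with a short sketch. Step 2 = 100% of the prevailing rate and step 5 = 112% are stated in the text; the uniform 4% spacing of the five steps is an assumption consistent with those two anchors, and the hourly rate is hypothetical:

```python
# FWS step rates relative to the local prevailing rate.
prevailing_hourly_rate = 25.00  # hypothetical local wage survey result, USD/hour

for step in range(1, 6):
    pct = 0.96 + 0.04 * (step - 1)  # step 1 = 96%, step 2 = 100%, ..., step 5 = 112%
    print(f"Step {step}: ${prevailing_hourly_rate * pct:5.2f} ({pct:.0%} of prevailing rate)")
```

Note that step 2, not step 1, corresponds to the full prevailing rate: new employees below step 2 earn slightly less than the surveyed rate, while steps 3 through 5 exceed it.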
A significant effort is being placed on silicon carbide ceramic matrix composite (SiC CMC) nuclear fuel cladding by the Light Water Reactor Sustainability (LWRS) Advanced Light Water Reactor Nuclear Fuels Pathway. The intent of this work is to invest in a high-risk, high-reward technology that can be introduced in a relatively short time. The LWRS goal is to demonstrate advanced fuels technology that is suitable for commercial development to support nuclear relicensing. Ceramic matrix composites are an established non-nuclear technology that utilizes ceramic fibers embedded in a ceramic matrix. A thin interfacial layer between the fibers and the matrix allows for ductile behavior. SiC CMC has relatively high strength at high reactor accident temperatures when compared to metallic cladding. SiC also has very low chemical reactivity and does not react exothermically with the reactor cooling water. The radiation behavior of SiC has also been studied extensively in the context of structural components for fusion systems. The SiC CMC technology is in the early stages of development and will need to mature before confidence in the developed designs can be established. Advanced SiC CMC materials do offer the potential for greatly improved safety because of their high-temperature strength, chemical stability and reduced hydrogen generation. Enlarged Halden Programme Group Meeting 2011
Oil-based fuels and chemical compounds are vital to the world economy. A Petroleum Engineer designs the equipment and systems used to extract the oil and gas used in the creation of these products. Petroleum Engineers design drill equipment for onshore and offshore activities and recommend plans based on the cost, the effort involved, and the potential return on investment of time and resources. Petroleum Engineers also evaluate the production of oil and gas wells through surveys and testing. Petroleum Engineers work with geoscientists and other specialists to understand the rock formations surrounding oil and gas deposits. For this reason, Petroleum Engineers must possess interpersonal communication and teamwork skills. A Petroleum Engineer must also have advanced research skills, as they are often required to find new ways to extract as much oil and gas as possible from underground reserves. Typical Duties and Responsibilities - Design equipment for extracting oil and gas from onshore and offshore reserves deep underground - Create plans for drilling in oil and gas fields, and then recovering the oil and gas - Develop ways to inject water, chemicals, gases, or steam into an oil reserve to force out more oil or gas - Ensure oilfield equipment is installed, operated, and maintained properly - Evaluate the production of wells through surveys, testing, and analysis Education and Background This position requires a bachelor’s degree in engineering, preferably with a major in petroleum engineering. Also acceptable are job candidates with mechanical, civil, or chemical engineering degrees. Some employers prefer applicants with a master’s degree or Ph.D. for certain positions. Skills and Competencies - Analytical, problem-solving and critical-thinking skills - Advanced mathematics skills - Teamwork and interpersonal communication skills - Strong technical writing ability - Experience with database and spreadsheet software - Demonstrated expertise in solving highly technical problems - Working knowledge of geology or thermodynamics According to PayScale, the median annual salary of a Petroleum Engineer with 1 year of experience: - Orlando, Florida: $70,000 - Tampa, Florida: $73,000 - Jacksonville, Florida: $74,000 - Miami, Florida: $79,000 - Atlanta, Georgia: $70,000 - Chicago, Illinois: $80,000 - Houston, Texas: $75,000 - Los Angeles, California: $84,000 - New York City, New York: $89,000 - Seattle, Washington: $79,000 - Overall: $75,000 With 5 years of experience: - Orlando, Florida: $82,000 - Tampa, Florida: $83,000 - Jacksonville, Florida: $85,000 - Miami, Florida: $88,000 - Atlanta, Georgia: $80,000 - Chicago, Illinois: $90,000 - Houston, Texas: $80,000 - Los Angeles, California: $91,000 - New York City, New York: $94,000 - Seattle, Washington: $85,000 - Overall: $89,000 Similar Job Titles - Energy Engineer - Field Engineer - Reservoir Engineer - Drilling Engineer - Completions Engineer - Production Engineer In addition to the degree requirements described above, many companies also require a Professional Engineer (PE) license, which requires a passing score on the Fundamentals of Engineering exam, on-the-job experience, and a passing score on the Principles and Practice of Engineering Examination.
Successful Petroleum Engineers can advance to overseeing larger-scale drilling and extraction projects and receive greater independence to design and evaluate projects and processes. According to the professional services firm Deloitte, one key trend in the petroleum engineering field is decreasing global demand for oil. One reason is the push in some countries to move away from dependence on fossil fuels. Another is that petroleum production has increased in several countries, leading to a more secure supply of oil. Another trend in the industry is the expectation of increased efficiency from oil and gas investors. Despite the global economic slowdown and a slowdown in production from oil and gas wells, investors are looking to petroleum companies to find new ways to cut costs, and Deloitte suggests investors could be hesitant to spend more money in the oil and gas industry until they see those savings. According to the U.S. Bureau of Labor Statistics, the need for Petroleum Engineers is expected to grow by three percent between 2018 and 2028, which is slower than average. The typical work hours for a Petroleum Engineer are from 9 a.m. to 5 p.m. However, many Petroleum Engineers work longer hours to hit project deadlines or troubleshoot production issues. Where You Can Find Jobs - 4 Corner Resources - Career Builder - Zip Recruiter - Engineering Jobs - National Society of Professional Engineers Are You Interested in Becoming a Petroleum Engineer? We will connect you to one of our headhunters or recruiters to see if you are a perfect fit for one of our job openings. If a job opening does not suit you, we will always keep you in mind as new positions open up. We have vast experience connecting professionals with some of the most well-known organizations in the country. Your next job or career path can be right around the corner. Check out our latest job openings and our blog for career advice. Feel free to contact us at any time.
Marysville Cotton Mill National Historic Site of Canada Marysville Cotton Mill Filature de coton de Marysville Links and documents 1883/01/01 to 1885/01/01 Listed on the Canadian Register: Statement of Significance Description of Historic Place The Marysville Cotton Mill National Historic Site of Canada is the focal point of the Marysville Historic District National Historic Site of Canada. Rehabilitated to serve as government offices, the imposing, four-storey, red-brick cotton mill building features a flat-roofed central tower and numerous multi-pane mullion windows. Located within the former settlement of Marysville, a model community built to house the mill workers, the building is situated within the block bounded by McGloin, Fisher, Duke, Marshall and Bridge Streets. The official recognition refers to the mill building on the legal lot. Marysville Cotton Mill was designated a national historic site of Canada in 1986 because: - it is a representative example of the brick pier cotton mills that were common in the Canadian textile industry during its expansionist phase. Industrialist Alexander "Boss" Gibson built this cotton mill between 1883 and 1885. Designed by the Boston architectural firm of Lockwood, Greene and Company Mill Architects and Engineers, the mill's construction was influenced by New England models, and it is a classic example of the brick “insurance mill” of the late 19th century. This four-storey building was constructed of locally made brick and features brick pier construction, a central water tower and fire-retardant materials on the interior. By 1900, Marysville Cotton Mill was among the largest mills in Canada. The mill was designed on the “slow-burning” principle and was state-of-the-art for its time, incorporating not only electric lighting but all those features characteristic of plants whose power was provided from a central plant and distributed by belts, pulleys and overhead shafting to machinery whose location within the complex was dictated by its place within the production framework. Despite its seemingly remote location, the mill was designed to supply a national market and did so throughout its working career. The mill continued manufacturing textiles until the late 1970s. Source: Historic Sites and Monuments Board of Canada, Minutes, June 1986. Key elements contributing to the heritage value of this site include: - the spatial relationship of the mill building to the various components of the Marysville Historic District National Historic Site of Canada, to the river, to existing and former mill sites, and to the former rail line, now a hiking trail; - the imposing scale of the mill building in relation to the surrounding buildings; - the four-storey rectangular massing of the building; - its four-storey main elevation with central tower; - its brick pier construction; - the large scale and regularity of the fenestration; - the decorative brickwork, particularly along the main façade; - surviving evidence of the standard features of a 19th-century cotton mill; - the surviving two-storey “Annex,” formerly the Dye House, with its brick construction, regular fenestration, and open interior spaces.
Recognition Authority: Government of Canada
Recognition Statute: Historic Sites and Monuments Act
Recognition Type: National Historic Site of Canada
Significant Dates: 1885/01/01 to 1973/01/01
Theme - Category and Type: Developing Economies - Trade and Commerce; Expressing Intellectual and Cultural Life - Architecture and Design
Function - Category and Type: Office or office building; Textile or Leather Manufacturing Facility
Architect / Designer: Lockwood, Greene & Company Mill Architects and Engineers
Location of Supporting Documentation: National Historic Sites Directorate, Documentation Centre, 5th Floor, Room 89, 25 Eddy Street, Gatineau, Québec.
In the fifth section, the fracture toughness modes and energy release rate were studied, and the crack initiation and the structures occurring in the welding zone were discussed. Zinc purity standard: ISO 3815. In order to simulate dew condensation, the temperature of the carbon steel was controlled by a cooling system. They're easier to weld and more scratch-resistant than zinc-galvanized steel sheets. Each framework station had chloride and sulfur dioxide meters as well as a weather station to measure temperature, environmental humidity, rainfall, wind speed and solar radiation. This was not the case with low carbon steel. These categories were C2 for Laja; C3 and C4 for the Arica and Antarctic stations, respectively; and the most aggressive, C5 and higher, at Quintero. After different exposure periods of up to 33 months, the samples were tested and analyzed. In this study, multiple factors related to the resulting absorbed energy have been discussed. The new LAS were designated 1605A, 1605B, 1604A, and 1604B. On the basis of the corrosion system, details of the fracture and fatigue characteristics are discussed. The unique nature of the galvanizing process provides a tough and abrasion-resistant coating, which means less site damage and speedy erection of structures. The fracture is extremely ductile, with a dimpled fibrous surface and secondary cracks. The effect of environmental climate on the steel corrosion rate in concrete was further determined. The mass ratio (α/γ) of crystalline α-FeOOH to γ-FeOOH in the rust layer formed on weathering steel exposed in an industrial environment increases with exposure duration. The toughness of steel and its ability to resist brittle fracture depend on a number of factors that should be considered at the specification stage. These specific environments significantly influenced the mechanical responses of steel exposed for 36 months. Charpy impact tests were carried out on samples taken both along the deposition direction and perpendicular to it, to analyze the impact toughness in different directions. When using SMAW (“stick”) welding, galvanized steel pipe can be welded in the same manner as uncoated steel. This, in turn, can trigger corrosive metal dissolution at crack tips of low-alloy steels in oxygenated high-temperature water and thus lead to continuous crack growth with an accelerated growth rate under constant load, at comparatively low stress intensity factors, given a correspondingly critical strain rate at the crack tip. Further, the steel exhibits a slight 300 °C temper embrittlement phenomenon. Finally, combining the results of the accelerated tests and the rust layer analysis showed that low-alloy steels such as 1605A and 1605B have better weathering-steel properties than Acr-Ten A for use in humid and salty weather. Surface regions of the galvanized member in the weld zone are selectively pretreated to remove the zinc coating and apply a nickel-base coating. Atmospheric corrosivity categories at each station under study were determined.
The unique metallurgical structure of the galvanized coating provides outstanding toughness and resistance to mechanical damage in transport, erection and service. The compositional change of the rust (corrosion products) layer formed on weathering steel exposed to atmospheres with different amounts of airborne sea-salt particles in Japan has been investigated by the X-ray diffraction method. The yield strength, tensile strength, and percentage elongation of galvanized steel sheet are measured. Eight steel plates were fabricated with varying C, Cr, and Nb additions under two different cooling rates, and their microstructures, tensile properties, and Charpy impact properties were evaluated. In this work, galvanized Q&P980 steel sheets were welded with LCS sheets. The present paper analyses this aspect of ISO 9223, focusing on the effects of metal composition, when using carbon steel, in corrosivity categorisation. The Charpy impact test was performed after extracting samples in directions both parallel and perpendicular to the deposition direction. The product of phosphonomethylation of hydroxylamine-O-sulfonic acid showed the highest inhibiting performance (corrosion rate 0.018 mm/year, corrosion inhibition efficiency 96%, …). The Swedish steel industry has combined traditional methods such as life cycle analysis with less traditional methods such as preference analysis in order to move towards a closed steel eco-cycle. The published BNF report ‘Galvanizing of structural steels and their weldments’, ILZRO, 1975, concludes that ‘… the galv…’. A polarization study reveals that the urea-Zn2+ system predominantly controls the cathodic reaction and suggests the formation of a protective film on the metal surface. The change of the surface morphology clearly affected the fatigue strength: the rougher the sample surface, the lower the fatigue strength. The soil cover of the near north of Chile (Norte Chico) is described on the basis of field materials, analytical data, and an analysis of the available literature. The ferrous artefacts were characterised. Vapor-phase corrosion inhibitor stretch film packaging is ready to meet the packaging challenges of the new millennium. Susceptibility to dynamic strain ageing should therefore be regarded as a further important material parameter for evaluating and discussing environmentally assisted crack corrosion of low-alloy steels in oxygenated high-temperature water. The galvanized coating becomes part of the steel surface it protects. Strength is the ability of a material to resist stresses. Pure iron (no carbon) is very ductile, and cast iron (considerable carbon) is brittle. This investigation aims to analyse the effect of the exposure angle on the corrosion rate of mild steel. The corrosion products were distinguished visually and by spectroscopy. Additive manufacturing of metals is an innovative near-net-shape manufacturing technology used to produce final solid objects by depositing successive layers of material, melted in powder or wire form, using a focused heat source such as an electron beam, laser beam, plasma or electric arc.
According to the results, the toughness of the steel without galvanization can vary from 70 to 10 J; this variation reveals a dramatic change of property that is as much a function of the different atmospheres as of the exposure time. The formulation consisting of 250 ppm urea and 50 ppm Zn2+ has 94% inhibition efficiency (IE). Device used: universal tensile testing machine. The results indicate that the steel acquires the optimum fracture toughness properties at a hardness of 305 HB, corresponding to a tempering temperature of 630 °C. Galvanized steel pipe is most commonly used in plumbing and other water-supply applications. But trisodium citrate is better than tartaric acid as a corrosion inhibitor. The passive metal surface around a local deposition is shown to participate in the corrosion process. The fracture toughness of resistance-spot-welded (RSW) lap joints was calculated from the results of shear tensile tests, giving the dependence of fracture toughness on welding current, welding time, and weld-zone hardness for galvanized DP600 steel sheets. It follows that, for a material correspondingly susceptible to dynamic strain ageing, local damage to the oxide surface layers in oxygenated high-temperature water is promoted, e.g. … For environmentally assisted crack growth of low-alloy steels in oxygenated high-temperature water, dynamic strain ageing is not a necessary influencing factor, but in materials highly sensitive to strain ageing it promotes crack initiation at smooth surfaces and environmentally assisted crack growth. The microstructure of the hot-rolled carbon steel contained ferrite/pearlite phases, while that of the quenched and tempered low-alloy steel contained a bainite structure. The procedure design was based on an intergranular corrosion/thermal stress interactive initiation mechanism and an environmentally assisted fatigue crack propagation mechanism. Strong oxidation occurred in all samples, with the formation of rust. The exposure media included industrially polluted sea water, drinking tap water and natural sea water, with samples exposed at 45° and 90° inclinations.
The effects of carbon equivalent and cooling rate on the tensile and Charpy impact properties were determined, and on the basis of the above studies a suitable mechanism has been proposed. A convenient measure of toughness is the Charpy V-notch impact test; high absorbed energy indicates that a material has high impact toughness with very ductile behaviour. Ductility is the ability of a material to deform plastically before fracturing. The strength and performance of these steels show slight decreases after exposure to corrosive environments compared with the as-received condition, to an extent related to the change in microstructure and grain size. Alloying for grain refinement was noted, and the scale-resisting element aluminium is also useful in ferritic heat-resisting steel grades. Hardness was measured to ASTM E18 and ASTM A370. Fatigue and impact properties were determined for both steels by instrumented impact testing at temperatures down to −150 °C, and crack growth accelerated in a hydrogen-related environment was discussed in terms of hydrogen-induced dislocation multiplication near a crack tip.
In the welding study, liquid metal embrittlement (LME) cracks were found on both the Q&P980 steel and LCS sides of the joints. AISI 4340 and stainless 17-7 PH steels were also exposed to corrosive environments, as were high-power electrical conductors of aluminium and of steels to the Australian Standard 1511 grade A specification; the effect of the most heavily Cl−-polluted atmosphere on aluminium was demonstrated electrochemically, and a high level of sulfur had reacted with the sample near the river.
In the corrosion-mapping work, the research group considered carbon steel, copper and aluminium as testing materials and exposed them to the atmosphere at 31 stations in order to develop the corrosivity map. The corrosivity of atmospheres can be categorized through either corrosion-loss measurements or the use of environmental data; although both methods are expected to give the same category, discrepancies have been found, depending on the availability of factual data. Samples were likewise exposed outdoors at one site in Mauritius to determine its atmospheric corrosivity, with exposure periods of 16 and 24 months and with morphology and attack intensity analysed through SEM-EDX. To simulate dew condensation, corrosion was reproduced in a chamber by controlling environmental factors such as temperature and relative humidity; relative humidity is the primary climatic factor affecting the steel corrosion rate, and measured rates were compared against a reference value of 0.01 mm/year (3.17×10−13 m/s). Corrosion rates of steel in the concrete microenvironment are lower than those in the open atmosphere, owing to the difference in microenvironment; many domestic and foreign scholars have studied durability parameters of concrete structures, and the available models differ in their sources, types, model parameters and applicable conditions. For weathering steels, a protective ability index based on the α/γ* mass ratio of the rust formed can be used: when α/γ* is more than 1, the rust layer is protective. Dynamic strain ageing can considerably influence the plastic deformation behaviour of susceptible unalloyed and low-alloy steels; specimens were strained to various plastic strains and subsequently analysed for local deformation. Galvanized steel is resistant to rusting and mineral-deposit build-up, but it still requires regular maintenance; the galvanizing process provides a tough, abrasion-resistant coating, and in atmospheric service the zinc typically lasts at least 20 years (see ‘Hot dip galvanizing – process, applications, properties’, www.gaa.com.au).
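The corrosivity categories cited above (C2 at Laja through C5 and higher at Quintero) follow the ISO 9223 scheme, which maps first-year metal loss to a category. As a minimal sketch of that mapping for carbon steel, consider the Python version below; the threshold values are quoted from memory of ISO 9223:2012 and should be verified against the standard before any real use.

# Sketch: mapping a first-year carbon-steel corrosion rate to an
# ISO 9223 atmospheric corrosivity category (C1..CX). Thresholds are
# approximate values recalled from ISO 9223:2012 - verify before use.
CARBON_STEEL_THRESHOLDS_UM_PER_YEAR = [
    (1.3, "C1"),   # very low
    (25, "C2"),    # low
    (50, "C3"),    # medium
    (80, "C4"),    # high
    (200, "C5"),   # very high
    (700, "CX"),   # extreme
]

def corrosivity_category(rate_um_per_year: float) -> str:
    """Return the corrosivity category for a first-year loss rate."""
    for upper_bound, category in CARBON_STEEL_THRESHOLDS_UM_PER_YEAR:
        if rate_um_per_year <= upper_bound:
            return category
    return "above CX (outside the scope of ISO 9223)"

# 0.01 mm/year = 10 um/year, the reference rate mentioned above.
print(corrosivity_category(10))   # -> C2

With this mapping, the 0.01 mm/year (10 µm/year) reference rate falls in category C2, consistent with the low-aggressiveness sites such as Laja.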
Once you have selected your location, studied the design plans for your project and gathered your materials, the building process can begin, and with it its first step: foundation construction. Several factors determine which type of foundation is best suited to your project. However, no matter which one you end up constructing, the crucial thing is to start with precise construction surveying. To stake out the foundation position for a rectangular house correctly, we have to construct a right angle. Hammer in the first stake and mark a right angle by means of an angle tool. Measure out a length of rope to find the central positions of the other parts of the small house foundations or other important points, for example the building corners. A right angle is obtained by building a simple angle tool of boards or stakes: a right angle occurs in a triangle whose sides are in a 5:4:3 proportion. A common multiple will help us reach practical dimensions for the tool. Some practical dimensions are indicated in the figure. The next step is adding the remaining points. For an orthogonal rectangular shape with side lengths (a) and (b), as in the figure, we can check the layout using the diagonals, and any imperfections in the surveying can be rectified. The diagonal lengths, (c), should be identical. In the case of a square shape, the diagonals should even be perpendicular to each other. Another possible step is constructing so-called benches. These are used to mark distances with a line. The stakes can then be removed and we can start working on the foundations. If necessary, ropes can be tied to the nails in the benches again to check the precision of your construction work. Before the concrete mix sets, the positions of the metal anchors to which the fundamental load-bearing flooring will be fastened must be finalised. Again, the precise locations according to the plans can be determined from the positions already marked on the auxiliary structures - the benches. The final surveying must be checked properly, as the bench positions can sometimes be disturbed during construction work. For more detailed information and step-by-step guides about small house foundations and other parts of the small house construction process, check out the How to build a tiny house book here!
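To make the two layout checks above concrete, here is a minimal Python sketch of the 5:4:3 right-angle test and the equal-diagonals test; the dimensions are illustrative, not taken from any particular plan.

import math

def is_right_angle(leg_a: float, leg_b: float, hypotenuse: float) -> bool:
    """Pythagorean check: the angle between the two legs is 90 degrees."""
    return math.isclose(leg_a**2 + leg_b**2, hypotenuse**2, rel_tol=1e-6)

def expected_diagonal(a: float, b: float) -> float:
    """Both diagonals of a true rectangle equal sqrt(a^2 + b^2)."""
    return math.hypot(a, b)

print(is_right_angle(3.0, 4.0, 5.0))   # True: the classic 5:4:3 triangle
print(expected_diagonal(8.0, 6.0))     # 10.0 m; measure both diagonals on
                                       # site and compare each to this value

On site, the same check is done with a tape: measure both diagonals, and if they differ, shift the corner stakes until they match.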
Engaging 1000 households in solid waste management, thereby improving the living environment for poor people as well as rejuvenating soil and safeguarding agricultural biodiversity.
What is the issue, problem, or challenge?
With industrialisation and urbanisation, the quantity of waste is increasing and the composition of waste is becoming more diversified. Today, waste in India is being burnt or dumped, either producing hazardous smoke or leaching into the soil and water. Implementing solid waste management schemes in Kancheepuram District, Tamil Nadu, would mobilise entire communities around the waste issue, while offering a simple solution for how to turn "waste into wealth" - e.g. vermicomposting and recycling.
How will this project solve this problem?
We will implement solid waste management schemes where the community is mobilised and learns the importance of segregating their household waste. The waste will be collected and processed through e.g. composting, biogas, and recycling.
Potential Long Term Impact
The project will manage the waste of 1000 households in an environmentally sustainable manner, thereby improving the local environment. It will also give employment to 10-15 individuals, especially women, from marginalised sections of society.
This project has been retired and is no longer accepting donations.
Very often in industrial processes, manufacturing relies not only on the principal component materials, but also on "processing aids" which make the fabrication of the finished product possible. One such example is in the foundry sector where casting is traditionally carried out in moulds made of silica and other IM. Metal casting is a major market for silica sand whose use goes back to the very beginning of the industrial revolution. Casting metal in sand has continued to be the cornerstone of many metal manufacturing processes and products. Processing aids are also critically important in civil engineering. Oil drilling or tunnel digging would not be possible without bentonite mud suspension, which gives lubrication and support.
Laboratory Key Performance Indicators (KPIs) are measures of the performance of the laboratory and its activities, such as projects, processes, products or services. KPIs in laboratories are also used to track the performance of the inventory, devices, environment, data and results. Laboratories are data factories and therefore provide high value for the organization. Data generation is also expensive, so it is very important to keep your laboratories performing well. Good business practice is to keep track of laboratory performance by measuring KPIs. When implementing KPIs in your lab you first need to determine what is important for the performance of the department. You can group the KPIs into three categories:
- What is directly important for the business – business-related KPIs
- What is important to keep the lab in good condition – lab condition KPIs
- How your deliverables affect the downstream processes in the organization – KPIs about data and deliverables
The choice of KPIs will largely depend on the type of lab you have, so you’ll find each set of KPIs assigned to one or more of the following:
- The academic research lab
- The industrial research lab
- The laboratory service
Download the spreadsheet with 40 KPIs
Save time implementing KPIs by downloading the list of 40 Laboratory KPIs.
Regardless of the laboratory type, you need money and staff to run the laboratory. Business-related KPIs, therefore, correlate your activities with time and money. If you are a laboratory service provider you will be interested in tracking your revenue, expenses and profits. Here are a few examples:
- Revenue: how much revenue you generate on a monthly, quarterly, or yearly basis.
- Cost of sample analysis: how much you spend on consumables, staff, and overheads per analysis.
- Revenue per sample/analysis: what the margin on your analyses is.
- Time to perform the analysis: how much time you need from receiving samples to notifying your customer about the results.
The business-related KPIs are also the things that management will be happy to hear about. This will not only help you keep better track of lab performance, but also connect your lab with the management. You might want to use a slightly different set of KPIs if you are purely a research lab. While you may get a small fraction of income through services, you are still strong on the expenses side. That is why you need to implement KPIs that keep good track of your expenses, so you keep to your research budgets. Examples of business-related KPIs for research labs:
- Number of commercial projects (academic labs)
- Average project revenue (academic labs)
- Allocated budget (industrial labs)
- Number of research grants
- Average grant revenue
- Number of full-time employees
- Number of master's students
- Number of PhD students
- Number of patents
- Number of scientific publications
KPIs about lab condition
Keeping the lab in good condition is the basic requirement to ensure you produce high-quality data. In this set of KPIs, you will measure how you are doing in terms of processes, inventory, and environment. By keeping an eye on these KPIs you can quickly spot a poorly performing component in your well-oiled machine and act on it. These KPIs are similar for all types of laboratories.
Examples of KPIs related to processes:
- Quality control
- The time your staff spends at the bench
- Turnaround time for the analyses
- Uptime of devices
- Total and free space on ELN or LIMS storage
- Uptime of ELN or LIMS
Inventory KPIs will tell you more about the consumables and equipment status. Some devices already have internal sensors that can tell you about their performance. You should also monitor the space in storage rooms and refrigerators. Examples of KPIs related to inventory:
- Monthly consumable use
- Amount of wasted consumables
- Consumable use per analysis
- Free space in refrigerators
- Consumables inventory size and reserve
When working on lab condition KPIs, be careful not to be dragged into a spiral of details. Think about the most critical KPIs you can implement today. You can then add more if it turns out you need them. Environmental KPIs are also very important but often overlooked, particularly in research labs. Some regulations require you to monitor the temperature in refrigerators and facilities daily. You could advance these systems by using an online monitoring system that reports the measurements directly to a digital system, such as an ELN or LIMS. You should also measure humidity, pressure, the amount of light, and even whether a window is open or closed. This can be particularly relevant in research environments with sophisticated equipment, because bad environmental conditions can produce unexpected results.
KPIs about data and deliverables
The data you generate is delivered to the customer, who makes a clinical, research, or business decision based on it. You should, therefore, monitor how good your data and deliverables are. Data quality can be monitored by performing regular quality control tests with certified reference samples. You can determine the accuracy and precision of repeated measurements on reference samples and use them as KPIs. Here are some examples of data KPIs:
- assay-specific precision of measurements,
- assay-specific accuracy of measurements,
- the regularity of reference material controls.
Another option to measure data quality is to extract the experiments that followed the same protocol in the past year and again calculate the precision and accuracy of each experiment. Then you can use the average precision and accuracy as KPIs. It might be problematic to do this if you keep your data in paper notebooks, but it is much easier with an electronic lab notebook (ELN).
Optimization of performance
Having KPIs in place opens a whole new opportunity to optimize the performance of your lab, although in practice this means optimizing the single process related to each KPI. Try to optimize only a few KPIs at a time, otherwise it might become too overwhelming for the staff. You should start by setting goals. When setting goals you can use the SMART framework: SMART goals are Specific, Measurable, Attainable, Relevant, and Time-bound. Let’s look at an example of a SMART goal in the laboratory. We would like to reduce the number of consumables that are wasted each month.
Specific – Reduce the consumables waste
Measurable – By 20%
Attainable – We have a large stock of consumables that regularly expire. We can reduce the number of consumables that expire by keeping a smaller stock.
Relevant – This will decrease the monthly costs of the department.
Time-bound – Achieve this in 6 months.
(You should give yourself enough time to be able to confidently detect the improvement.) The average number of expired reagents is 50 per month. In 6 months we would expect to decrease this number to 40.
Data acquisition options to calculate KPIs
Most of the data can be extracted from existing platforms, such as a LIMS or ERP. For the rest, you can implement sensors and logging systems. You should think about digitalizing your laboratory, which simplifies data logging. In a digitalized lab you would, first of all, want to avoid paper and manual data entry. The digital software solutions used in a digitalized lab should make it possible to define, log, measure, and track KPI data, either internally or from connected sensors. You will want to track the KPIs regularly, as this is the only way to spot the weaknesses in the processes and optimize performance. Dashboards are your best friend here – they show multiple data visualizations on one screen, so you can easily glance over them. Data analysis can be done in spreadsheet software. Currently, there are not many software solutions specifically made to measure KPIs. In a large organization with significant budgets, it would be feasible to develop a custom solution.
Keeping track of the performance of your lab is a good idea, regardless of the size and type of your organization. By keeping KPIs in check you will ensure that you produce high-quality data and that you keep your costs under control. You will be able to identify the bottlenecks and poorly performing parts of your lab and act upon them. KPIs are also a very important part of the digital strategy. Laboratory digitalization addresses many challenges present in laboratories, such as how you log and transfer data, and how you perform experiments. Digitalized laboratories will disrupt the life science industry, and now is a good time to start planning the digital strategy for your lab.
1. Salinas et al., 2010. Achieving continuous improvement in laboratory organization through performance measurements: a seven-year experience.
2. Hogan, 2012. 7 Performance Metrics to Optimize Laboratory Quality and Productivity.
3. Headrick, 2015. A Framework for Specific, Measurable, Achievable, Relevant, and Time-Based Performance Goals.
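As a concrete illustration of the data-quality KPIs described above, here is a minimal Python sketch that computes precision (as a coefficient of variation) and accuracy (as relative bias) from repeated measurements of a certified reference sample; the measurement values and the reference value are invented for illustration.

from statistics import mean, stdev

# Repeated measurements of a certified reference sample
# (all numbers are made up for illustration).
reference_value = 100.0
measurements = [98.7, 101.2, 99.5, 100.8, 99.1, 100.4]

def precision_cv(values: list[float]) -> float:
    """Coefficient of variation in percent: lower is more precise."""
    return 100 * stdev(values) / mean(values)

def accuracy_bias(values: list[float], reference: float) -> float:
    """Relative bias in percent: closer to zero is more accurate."""
    return 100 * (mean(values) - reference) / reference

print(f"precision (CV): {precision_cv(measurements):.2f}%")
print(f"accuracy (bias): {accuracy_bias(measurements, reference_value):+.2f}%")

Tracked per assay and plotted over time on a dashboard, these two numbers make drift in either precision or accuracy visible long before it affects a clinical or business decision.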
Rotary pumps are a common pump type based on the principle of rotating parts that trap water or any fluid at the inlet and then channel it, through the vacuum created, into the discharge port. These pumps are a kind of positive displacement pumping unit. They are preferred because they automatically remove air from the pipelines, eliminating the need to remove the air manually. Rotary pumps are highly efficient and are suited to operating at a steady but relatively slow speed, because high speed causes erosion of the rotating parts by the fluids, which interferes with the efficiency and durability of the pump. Basically, there are three types of such pumps, which are discussed below.
1. Gear Pumps
The most basic type of rotary pump, gear pumps operate on the movement of two gears placed beside each other so that their teeth are enmeshed. The gears move in opposing directions, giving rise to a current that traps fluid between the teeth and the outer casing of the gears. The fluid is ultimately released at the discharge vent of the pump. Generally, there are two categories of gear pumps: the external and the internal gear pump. The former is commonly known simply as a gear pump, and the principle described above governs it. Here the gaps between the teeth of the gear are responsible for the movement of the fluid from the inlet to the discharge outlet. The gears stay in place with the support of bearings present on both sides. The internal pump has two gears that un-mesh at the suction side, creating gaps that let air pressure push in the fluid. These gaps move the fluid to the discharge side, and the rotating gears then re-mesh to discharge the fluid.
2. Screw Pumps
Employed mostly in irrigation, the screw pump uses the traditional principle of the Archimedean screw. These rotary pumps have one to three screws that move viscous fluids along the axis of the set-up. The viscosity can be either high or low. The screws are intermeshed and rotate about an axis in a clockwise or anticlockwise direction. The water or fluid is transferred from one screw thread to the next by contact between the screw flights and the housing. The volume of fluid that can be transferred depends on the size of the pump unit, the area of the screw surfaces and the rotating speed of the rotors. Rotors in a screw pump can be timed or untimed according to the needs of the unit. For screw pumps to function efficiently, the rotors must turn at a rate that allows each pump cavity to fill completely, so that the pump works to its maximum capacity.
3. Moving Vane Pumps
This group of rotary pumps has a housing that is bored cylindrically, with a suction inlet at one end and a discharge outlet at the other. An axis is placed slightly above the cylinder’s centerline. A cylindrical rotor with a diameter smaller than the cylinder is driven along this axis. The clearance, or gap, between the cylinder and the rotor increases from the top to the bottom. A vane attached to this rotor moves in and out as it rotates. This movement maintains sealed gaps between the wall of the cylinder and the rotor. The moving vanes trap gas and liquid at the suction inlet, from where, as the space contracts, the liquid or gas is transferred to the discharge outlet.
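Because rotary pumps are positive displacement machines, their theoretical flow is simply displacement per revolution times speed, reduced by a volumetric efficiency that accounts for internal leakage. Here is a minimal Python sketch of that relationship; the pump figures are illustrative, not taken from any specific model.

def flow_rate_lpm(displacement_cm3_per_rev: float,
                  speed_rpm: float,
                  volumetric_efficiency: float = 1.0) -> float:
    """Flow in litres per minute for a positive displacement pump."""
    return displacement_cm3_per_rev * speed_rpm * volumetric_efficiency / 1000

# The same pump at a steady, moderate speed...
print(flow_rate_lpm(50, 600, 0.95))    # ~28.5 L/min
# ...delivers proportionally more at higher speed, but (as noted above)
# high speed accelerates erosion of the rotating parts by the fluid.
print(flow_rate_lpm(50, 1800, 0.95))   # ~85.5 L/min

The linear relation between speed and flow is what makes these pumps easy to meter, and it is also why manufacturers quote a maximum speed: flow can always be raised by spinning faster, but only at the cost of wear.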
TeachMeFinance.com - explain strategic mission
strategic mission
'Strategic mission' is a military term. It means '(DOD) A mission directed against one or more of a selected series of enemy targets with the purpose of progressive destruction and disintegration of the enemy's warmaking capacity and will to make war. Targets include key manufacturing systems, sources of raw material, critical material, stockpiles, power systems, transportation systems, communication facilities, and other such target systems. As opposed to tactical operations, strategic operations are designed to have a long-range rather than immediate effect on the enemy and its military forces'.
Seeing and learning about the daily workings and ways of life in other countries and cultures helps me keep things in perspective. While in Morocco, I noticed that most ground-level work was done manually by people working in a bent-over, flexed-back posture. This was a common practice for the guy mowing the lawn (with hand shears), the person painting the curb, and even the person mopping the floor with a hand cloth. Watching them work in this “butts up” posture made me cringe. In any country or workplace, people will bend and twist into awkward postures in order to complete work tasks even when they don’t have the right equipment. Morocco is rich with Arabic and French history, handicrafts, fantastic foods, and traditional ways of working. Simple tools like a lawn mower, a longer-handled brush or paint sprayer, or a mop would have eliminated the need for these people to work in such positions. But as in many places around the world, the cost of labor is cheaper than the cost of things, equipment, and the right tool. But not all is bad from an ergonomic standpoint. Just around the corner in the Souk (market), I spotted one of the many handcarts used to transport goods and materials through the narrow, winding, and unpaved streets and alleyways. Cars and trucks are not used in this part of the city, since it was laid out long before motorized vehicles were created. As I watched a porter transporting a cart full of mangos, I realized the good design of the carts: design that had evolved over the years as new materials (steel frame and rubber wheels) replaced wood. The waist-high handle placement and large pneumatic wheels improved the ergonomics of pushing the cart. Even though I was on vacation, I could not help but investigate the design. I measured the force required to get the cart moving and the force to sustain it, then compared them with the limits of the Snook-Ciriello tables (pushing). The force to get the cart moving (33 lb) and the sustained force (19 lb) were within the maximum acceptable forces recommended. This indicates that good cart design helped reduce the amount of force the porter has to apply to move the weight of the cart (~50-70 lb) and cargo (~100 lb of fresh mango) across an uneven surface. This experience is one illustration of the challenges we face improving ergonomics in the workplace, including design of equipment, perception and value of manual work versus investment in tools, cultural differences (of work, health, medicine, etc.) and perception of hazard and value. What ergonomic “challenges” and “successes” have you seen in other cultures?
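A minimal sketch of the cart assessment described above: comparing measured initial and sustained push forces against maximum acceptable limits. The measured forces are the ones reported in the story; the limit values are placeholders, not actual Snook-Ciriello table entries, which depend on population percentile, handle height, push distance and frequency.

# Compare measured push forces against acceptable limits.
# Measured values come from the story above; the limits below are
# PLACEHOLDERS, not real Snook-Ciriello table entries - look the real
# values up for the task's percentile, handle height, distance, frequency.
measured = {"initial_lb": 33.0, "sustained_lb": 19.0}
limits = {"initial_lb": 50.0, "sustained_lb": 30.0}   # placeholder limits

for phase, force in measured.items():
    verdict = "acceptable" if force <= limits[phase] else "exceeds limit"
    print(f"{phase}: {force} lb vs limit {limits[phase]} lb -> {verdict}")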
Background and history
The reader is often looking for the reason you got into this business and why you think it is important to be involved with the future of this organization. Typically, the competitive advantage the firm enjoys in the marketplace was developed from a history of trial and error. Often the principals in the firm have a working knowledge of the product or service offered and how it stacks up against the competition.
Location and Facilities
Sometimes this section is very important, sometimes not so much. It depends a lot on the type of business. Do not be redundant and add details that are best put in the operations section to follow. If the public needs access to the facility, then details of the layout are useful. If the location is less than optimum, then advertising and promotion of the location will be needed. Underestimating the additional costs to build consumer awareness is a common plan shortcoming. If the company is a startup there is no history, but the principals in the business may have some relevant background in other companies that directly relates to this newly formed enterprise. Don’t tell stories about the history of the organization unless they are relevant to explaining the competitive advantage the firm has.
Is this a “C” corporation, a subchapter “S”, an LLC or a sole proprietorship? Define the structure and give a brief overview of the ownership structure. Some details from the bylaws are good if relevant to the understanding of the business plan. This is sometimes the case when patents, royalties and technology transfer agreements are in place.
Technological innovation has had a major impact on the world of design; it is not only an outcome of the design process, but also provides opportunities and options for the designer. Technology has not only provided opportunities but has also contributed to the complexity of many design processes. In the industrial world there often exists the need for large teams of designers to work collaboratively in the production of large or complex projects. In such situations Multi-Disciplinary Designer Teams (MDDTs) are formed. The complexity of the problem demands that the team comprise individuals who have training and experience in a variety of design disciplines. These discipline areas, depending on the design project, could include designers from a range of design fields, e.g. electrical engineering, industrial design, architecture etc. Reasons for working collaboratively in the design process are: (1) The complexity of designing a major item, e.g. a large building, requires specialists from a diverse range of disciplines, including architects, quantity surveyors, and structural and service engineers. (2) The group's effectiveness in reaching a successful outcome is greater than the effectiveness of an individual designer undertaking the same problem [Peng, 1991]. Lawson, using the example of architects, demonstrated the importance of collaboration to their role as designers: "An examination of professional diaries is likely to show that most architects spend more time interacting with other specialist consultants and with fellow architects than working in isolation …" [1990, p.184].
"Frackwater" linked to earthquakes in Ohio COLUMBUS, Ohio (AP) - A series of small earthquakes being studied for their potential links to disposal of gas drilling wastewater in Ohio is shaking up environmentalists and politicians. It's long been known that disposal of so-called "frackwater" can cause earthquakes if injected near a fault. The jury is still out on whether Ohio's are caused by a wastewater well near Youngstown. In the battle of public perception, though, earthquakes grab attention. That's especially true in Ohio, where the geology and politics are positioned to accept wastewater from elsewhere. Environmental groups plan a protest next week and are stepping up calls for a drilling moratorium. A Youngstown-area senator is seeking a public hearing. The Ohio Petroleum Council says any public anxiety is misplaced because earthquakes caused by frackwater disposal are so rare.
Study: Size Of Your Electronic Device Can Affect Behavior CAMBRIDGE (CBS) – A new study from a couple of Harvard Business School researchers suggests that the size of your electronic device does matter. In fact, smaller devices could actually be wrecking your career and your social life. Maarten W. Bos and Amy J. C. Cuddy looked at whether the size of your electronic device can dictate assertive behavior. They randomly assigned participants to use an iPod Touch, an iPad, a laptop computer, or a desktop computer. During the experiment, they found that participants working on larger devices acted more assertively than those using smaller devices. The researchers suggested that, in line with past research, this was directly linked to posture. More spread out posture led to more assertiveness. This particular experiment was able to manipulate posture by using the different-sized devices. Larger devices led to more spread-out posture. “Hunching over our smart phones before a stressful social interaction, like a job interview, may undermine our confidence and performance during that interaction,” the study says. “We suggest that some time before going into a meeting, and obviously also during it, you put your cell phone away.”
Earthquakes, Water Pollution and Increased Greenhouse Gas Emissions? Fracking – Strike Number Three? The last decade has seen a sustained campaign by the hydraulic fracturing ("fracking") industry against its critics, as the fracking industry in the U.S. alone was worth an estimated $76 billion in 2010 and is projected to grow to $231 billion in 2036, if only those pesky environmentalists can be sidelined. According to Washington's Energy Information Administration, production of shale gas in the United States in 2010 totalled 4.87 trillion cubic feet (tcf), compared with 0.39 tcf only a decade earlier. The combination of horizontal drilling and hydraulic fracturing has already transformed North America's natural gas market in less than half a decade. In 2000 shale gas was 1 percent of America's gas supplies; today it is 25 percent. While U.S. energy companies began fracking for gas in the late 1990s, there was a dramatic increase in 2005 after the administration of President George W. Bush exempted fracking from regulations under the U.S. Clean Water Act. According to the Energy Information Administration, shale gas production has grown 48 percent annually. But there are still some snakes to be chased from the industry's campaign to convince the electorate that natgas produced by fracking is safe. As of 8 December 2011, the Environmental Protection Agency for the first time said it had found chemicals used in fracking in a drinking-water aquifer in west-central Wyoming. Soothing the electorate, the industry group Energy in Depth reported, "The history of fracturing technology's safe use in America extends all the way back to the Truman administration, with more than 1.2 million wells completed via the process since 1947." And the feds are backing fracking as well: the U.S. Department of Energy estimates that the national gas resource can be sustained for another 110 years at current consumption rates. In 2009 an industry-financed study reported that 622,000 people are directly involved in the discovery, extraction and distribution of U.S. natural gas. As for "insider" influence, in 2005 former Vice President Dick Cheney, in partnership with the energy industry and drilling companies such as his former employer, Halliburton Corp., successfully pressured Congress to exempt fracking from the Safe Drinking Water Act, the Clean Air Act and other environmental laws. Even worse, a report released the following month by the U.S. National Center for Atmospheric Research noted that switching from coal to natural gas as an energy source could result in increased global warming, mainly due to the methane leakage problem, which is common but unregulated. In a further potential federal sandbagging of the natgas industry, the federal Environmental Protection Agency, which studied fracking and deemed it safe in 2004, is taking another, broader look at the practice and may end up taking a more active role, with a broader study expected to be finished next year. Maalox moments all – but now fracking is being charged with contributing to global warming by releasing substantial amounts of methane, a greenhouse gas 20-100 times more potent than carbon dioxide.
According to Igor Semiletov of the International Arctic Research Centre at the University of Alaska Fairbanks, “Each methane molecule is about 70 times more potent in terms of trapping heat than a molecule of carbon dioxide.” Professor Robert Howarth, Professor of Ecology and Environmental Biology and director of Cornell’s agriculture, energy and environment program has noted that his research shows that one well-pad fracking shale gas would emit more greenhouse gases than a community of 100,000 people in a year. Methane already accounts for a sixth of U.S. greenhouse gas emissions (GGEs). In addressing earlier concerns about the pollution impact of fracking Dr. Howarth wrote in Boston University’s Comment 14 September article, “Should Fracking Stop?,” “Many fracking additives are toxic, carcinogenic or mutagenic. Many are kept secret. In the United States, such secrecy has been abetted by the 2005 ‘Halliburton loophole,’ which exempts fracking from many of the nation’s major federal environmental-protection laws, including the Safe Drinking Water Act… Fracking extracts natural salts, heavy metals, hydrocarbons and radioactive materials from the shale, posing risks to ecosystems and public health when these return to the surface… Because shale-gas development is so new, scientific information on the environmental costs is scarce. Only this year have studies begun to appear in peer-reviewed journals, and these give reason for pause.” Even worse, during the UN climate change conference in Durban last week, Dominic Frongillo, a town councillor from Caroline, New York, which is atop the Marcellus Shale seam, estimated to contain 489 trillion cubic feet of extractable natural gas noted that “Before I left for Durban, Professor Howarth told me that “preventing unconventional gas extraction could be the number one thing we could do in the short term to control growth of U.S. greenhouse gas emissions.” According to Professor Howarth, “Methane is an incredibly potent greenhouse gas… Our research indicates that methane makes up more than 40 percent of the entire greenhouse gas inventory for the U.S. … We really need to get this methane leakage under control, if we are to seriously address global warming.” His paper, “Methane and the greenhouse gas footprint of natural gas from shale formations,” written with Renee Santoro and Anthony Ingraffea of Cornell concluded that shale gas is more polluting than oil and conventional natural gas, noting, “The footprint for shale gas is greater than that for conventional gas or oil when viewed on any time horizon, but particularly so over 20 years. Compared to coal, the footprint of shale gas is at least 20 percent greater and perhaps more than twice as great on the 20-year horizon.” The pushback has already started, with a number of his Cornell colleagues questioning Dr. Howarth’s research methodology. See Lawrence M Cathles III, Larry Brown, Milton Taam and Andrew Hunter, “A Commentary on “The Greenhouse gas footprint of natural gas in shale formations” by R.W. Howarth, R. Santoro, and Anthony Ingraffea” @http://cce.cornell.edu/. What is clear is that while Cornell’s faculty is divided over the consequences of fracking, the industry has impacted the university’s Board of Trustees, which among other things oversees the university’s $5.28 billion endowment fund. According to the 16 February 2010 edition of the “Cornell Sun,” “Chairman of the Board of Trustees Peter Meinig ’61 is one of the most powerful decision-makers at Cornell. 
But as the University begins a long process to consider whether it should lease its land in the Marcellus Shale to gas drilling companies, Meinig’s former ties to the natural gas industry has raised some eyebrows in the Cornell community and beyond. From 1993 to 2001, Meinig served on the board of directors of Williams Companies, Inc, one of the nation’s largest natural gas companies. A Fortune 200 company that generated $1.42 billion in profits in 2009, Williams transports about 12 percent of the natural gas consumed in America everyday and has interests in the Marcellus Shale basin, according to the company’s website.” What is clear is that the impact of natural gas hydraulic fracturing at Cornell has turned into a mounting academic storm with passionate advocates on both sides of the fence. It is notable that Cathles’, Brown’s, Taam’s and Hunter’s critique features prominently on the website of America’s Natural Gas Alliance,” (ANGA) a pro-industry advocacy group. Let the games begin! Written by. Dr. John C.K. Daly for OilPrice.com. The opinions expressed in this article are solely those of the author, Dr. John C.K. Daly. For more information on oil prices and other commodity related topics please visit http://oilprice.com By. John C.K. Daly of Oilprice.com
SpaceX’s Falcon 9 is the world’s first orbital-class reusable two-stage rocket, designed to transport humans and cargo into Earth orbit and beyond. It is powered by nine Merlin engines burning rocket-grade kerosene (RP-1) and liquid oxygen as propellants, held in aluminum-lithium alloy tanks, and is designed for recovery and reuse. Burning propellant decreases the rocket’s mass and changes the vehicle dynamics near the end of first-stage flight, so the engines are throttled down to limit acceleration. The engines are also used to re-orient and decelerate the first stage prior to re-entry and landing. The Falcon 9 first-stage landing legs, placed symmetrically at the base and constructed from carbon-fiber and aluminium-honeycomb composite material, deploy just prior to landing.
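The throttling described above follows directly from Newton's second law: at roughly constant thrust, acceleration a = F/m rises as propellant burns off. The Python sketch below illustrates the effect; all numbers are placeholder assumptions for illustration, not actual Falcon 9 figures.

# Why a first stage throttles down late in flight: at constant thrust,
# acceleration a = F/m grows as propellant mass burns off.
# All values below are illustrative assumptions, NOT Falcon 9 specs.
thrust_n = 7.6e6          # assumed total sea-level thrust of nine engines
stage_dry_mass_kg = 25_000    # assumed first-stage dry mass
upper_and_payload_kg = 120_000  # assumed upper stage + payload mass
propellant_kg = 400_000   # assumed first-stage propellant load

def acceleration_g(propellant_remaining_kg: float) -> float:
    """Vehicle acceleration in g at a given remaining propellant mass."""
    mass = stage_dry_mass_kg + upper_and_payload_kg + propellant_remaining_kg
    return thrust_n / (mass * 9.81)

print(f"at liftoff:   {acceleration_g(propellant_kg):.2f} g")   # ~1.4 g
print(f"near burnout: {acceleration_g(20_000):.2f} g")          # ~4.7 g

With these placeholder numbers, acceleration more than triples between liftoff and burnout at fixed thrust, which is why the engines are throttled down to keep loads on the vehicle and payload within limits.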
Textile waste is a huge problem all over the world. In the U.S. alone, 21 billion pounds of textiles get thrown into a landfill every year. Of course, clothes aren’t the only contribution to global waste. With so many people focused on issues like climate change and carbon emissions, clothing, accessory, and grooming industries all need to take a closer look at how to practice ethical manufacturing and sustainability. There are plenty of big-name brands across the world that have already committed to a more sustainable approach. However, it takes more than just a few international names to make a difference. That’s why it’s so important for small, bespoke companies to make a commitment to ethical production and sustainability. Chances are, you’ve heard both of those words before. They tend to get thrown around a lot these days, especially from a marketing standpoint. But what exactly is ethical manufacturing? What do sustainable businesses do? Let’s take a closer look.
Why Ethical Manufacturing is Important
Ethical businesses care about every last detail of their company, not just their bottom line or profit margins. They focus on everything from the quality and environmental impact of the products they create, to the way they are created. They also care about their employees and customers. It is a full circle of manufacturing that ensures that the best approach is taken to keep everyone happy and healthy. When it comes to production, ethical manufacturing involves taking care of the employees that work for a company. Ethical manufacturing focuses on the safety and wellbeing of all workers, no matter what. Companies that practice this way of business care more about how their workers are doing than their productivity. With workplace stress causing so many problems in businesses across the world, this approach can actually lead to happier, healthier employees who are more loyal to the company. Ethical manufacturing is also about creating ethical products that get passed on to consumers. Just as much as an ethical business cares for their employees, they care for the people who are going to buy their products. So they ensure nothing contains harmful materials, chemicals, or contaminants. Simply put? Ethical businesses showcase a lot of care in everything they do, and they can often be a breath of fresh air in an otherwise greedy corporate world. They also aren’t afraid to talk about those practices in an effort to influence other businesses.
What is Sustainable Business?
Much like ethical companies, sustainable manufacturers also care for their employees and consumers, but they put an even greater focus on the health of the planet. Sustainable businesses create products that are meant to last, and they do so in ways that aren’t harmful to the environment. Sustainable businesses are always trying to find ways to either reduce or offset carbon emissions while putting environmentally-friendly actions into play within their facilities. That could include anything from recycling to eliminating paper use. Far too many brands, especially in the fashion industry, focus on creating cheaper items in mass quantities. While those savings can be passed on to the consumer, those clothes aren’t built to last. They are the ones that find their way into landfills across the world. It’s a vicious cycle, causing consumers to buy more clothes, trash them, buy more, and so on. Sustainable businesses are dedicated to creating products that are durable. When they last longer, they don’t have to end up in the trash.
At The Foxhole, we’re proud to work with different brands that make sustainability and ethical manufacturing a priority, from small clothing brands like Knickerbocker Mfg. Co., to grooming brands like Salt & Stone. We even offer upcycled t-shirts and bags made from recycled plastic and post-industrial cotton as a showcase of our own dedication to sustainability. There are so many consumers who don’t know about ethical and sustainable manufacturing. Now is a great time to get educated about these practices and how they can help to create a brighter future for the next generation. It’s also important to learn about some bigger business practices, and how your favorite clothing, accessory, and grooming companies could be contributing to a future that looks pretty grim. Responsible manufacturing is the way of the future, and we’re happy to be fully on board. Don’t be afraid to do your own research, support brands who are dedicating themselves to sustainability, and do your part to help the environment and overall work culture across the globe.
Dating as far back as 1844, the Gdansk Imperial Shipyard — located on the banks of the Martwa Wisla and Motława rivers close to the city’s historic center — has played a pivotal role in the region, as well as in Poland’s larger heritage. The shipyard was crucial to Gdansk’s economic rise as a power center for shipbuilding on the Baltic coast. It also played an active role in the historic collapse of Communism and the rise of the Solidarity movement. Today, this former industrial site represents a well of rich history and legacy, but also a unique opportunity to become a social powerhouse in Gdansk. The Imperial Shipyard is being transformed into a thriving part of the inner city in a collaboration between Henning Larsen, A2P2 Architecture and Planning, BBGK Architekci and Systematica. The goal of the 400,000 m² development is to revitalise the shipyard as a powerful social engine by creating a mixed-use neighbourhood by the waterfront — a neighbourhood that restores the connection to the water that has been missing and opens up a part of the city that people know of, but that has been closed off. Since the Imperial Shipyard holds unique traces of generations of history in its remaining buildings, along with a deep personal connection to many families, connecting the city to the waterfront will help give the Imperial Shipyard back to its people. The redevelopment plan includes a new space for market days, ice skating and concerts. At the same time, it will offer opportunities to bask in spectacular views over the river and the city, and to savour the lush outdoors in an urban forest. Pedestrian- and bicycle-friendly paths will connect people easily to and from the city center. There are also plans to explore an urban beach and marina, alongside facilities for kayaking, in an effort to reclaim the shipyard for the people. This project is a prime example of adaptive reuse architecture and urban mining. Adaptive reuse refers to the process of reusing an existing building for something other than its original intended purpose, or the reuse of materials, buildings, or volumes. In Gdansk, we’re using existing buildings to create new life. When it is not possible to reuse buildings due to decay, we reuse materials; the ambition is that nothing leaves the premises. This dovetails with the idea that cities could become the mines of the future, and that raw materials can be reclaimed from spent products, buildings and secondary materials, allowing us to minimise the consumption of scarce resources, cut costs and reduce landfill waste. As the Earth’s resources are rapidly degraded and the demand for raw materials increases, greater efforts will have to be made on recycling. The Gdansk Imperial Shipyard demonstrates how critical it is to consider the entire life cycle of architecture and construction. Just as we need to incorporate a building’s afterlife in our designs, we also need to think of ways to use empty sites prior to construction.
Functional management is the most common type of organizational management. The organization is grouped by areas of speciality within different functional areas (e.g., finance, marketing, and engineering). Some refer to a functional area as a "silo". Besides the heads of a firm's product and/or geographic units, the company's top management team typically consists of several functional heads, such as the chief financial officer, the chief operating officer, and the chief strategy officer. Communication generally occurs within a single department. If information or project work is needed from another department, a request is transmitted up to the department head, who communicates the request to the other department head. Otherwise, communication stays within the department. Team members complete project work in addition to normal departmental work. The main advantage of this type of organization is that each employee has only one manager, which simplifies the chain of command. - "A Guide to the Project Management Body of Knowledge (PMBOK)", Project Management Institute, ISBN 1-880410-23-0 - Matrix management
The origins of the Maydown project grew out of a dispute with the British chemical giant ICI. In the early 1950s, ICI opened a U.S. dye facility. In response, DuPont’s Organic Chemical Department laid plans to enter the British rubber market by building a neoprene plant. In 1957, DuPont UK Ltd. announced plans to build that plant on a former naval airfield at Maydown, seven miles from Londonderry, Northern Ireland. Construction started that year. Since it began operations, Maydown has also manufactured Orlon (from 1968), Lycra (from 1969) and Hypalon. These products have since ceased production or, in the case of Lycra, been transferred to new ownership.
At what thickness is white bronze non-porous? If this was to be used after acid copper plating as a barrier before gold plating, would the non-porous thickness be enough to stop migration? L.J. White bronze, often called Speculum, is used as a replacement for nickel in the manufacture of jewelry and other items that require a barrier layer between the base metal and the gold layer. The white bronze alloys, which contain between 40 and 60% tin, are, as the name implies, white in color, tarnish resistant and moderately hard. Because of concerns about nickel in the environment, this material will probably see greater use in the future. When white bronze is deposited as a barrier layer, the thickness of the deposit is usually between three and five microns. Whether a thinner layer would be adequate, I honestly don’t know. A number of papers have been published on this topic; a quick search of the Surface Finishing Abstracts web site turns up 31 references on white bronzes.
Construction organisation design The internal structure of a company is called the organisation design. The internal structure of the company affects its efficiency, effectiveness and ability to respond to new opportunities, all of which can affect the level of organisational profit. Classical organisation theory The classical organisation is most often associated with bureaucracy. The design of the bureaucracy is attributed to Max Weber and dates back to the beginning of the 20th century. Weber specified what he believed was an ideal company structure. Some characteristics of Weber's ideal bureaucratic structure can be described as follows: Positions arranged in a hierarchy According to Weber: 'The organisation of offices follows the principle of hierarchy: that is, each lower office is under the control and supervision of a higher one.' Herbert Simon identified similarities between the hierarchy design and nature and the laws of physics when he stated: 'Each cell is in turn hierarchically organised into a nucleus, cell wall, and cytoplasm. The same is true of physical phenomena such as molecules, which are composed of electrons, neutrons, and protons.' Specialisation Bureaucracy is based on specialisation, power and competence. Each level of the structure has to know its competence, goals and the subjects that it is in charge of. The authority that gives orders is very important in bureaucracies: orders come only from this authority and not from anywhere else. Each part of the chain in the bureaucratic model knows precisely its competence, so as not to conflict with matters under the care of other parts of the organisation. Impersonal relationships Weber believed that the ideal bureaucracy must work with impersonal relationships. To produce rational decisions it is necessary to leave out personal emotions such as passion, love, hate, etc. Strong rules The ideal bureaucratic company is a stable one. To achieve stability even when the personnel inside the company are changing, there must be a set of abstract rules for all circumstances. This includes all of the company's rules, from specifications of particular internal processes and how to accomplish them, to permits and prohibitions on employee behaviour. To maintain stability and give employees a feeling of safety and security there are specific rules for promotion. Promotions are made according to achievements and seniority. Older employees (in terms of time served within the company) are higher up the ladder of competency, and it is almost impossible for an individual to achieve several levels of promotion at once. Technical qualifications Employees are selected and appointed on the basis of technical qualifications rather than personal connections. Weber's model is an idealised design, a concept of theory, which Weber believed to be the most effective design for companies in the early 20th century. However, bureaucracies were much criticised by sociologists and philosophers, including Karl Marx, who suggested that bureaucracies are used primarily to control people and have strict rules which stifle the enthusiasm and initiative of employees. Bureaucracies have been used for many years in many companies, with many modifications to Weber's idealised model. This has produced a pragmatic evaluation of the bureaucracy concept, and many scholars and management consultants, including Peter Drucker and others, observed it closely and proposed modifications to Weber's model.
Warren Bennis summarised some of the deficiencies of bureaucracy as follows: - Bureaucracy does not adequately allow for personal growth and the development of mature personalities; - It develops conformity and groupthink; - It does not take into account informal organisation or emergent and unanticipated problems; - Its systems of control and authority are hopelessly outdated; - It has no juridical process; - It does not possess adequate means for resolving differences and conflicts between ranks and, most particularly, between functional groups; - Communication and innovative ideas are thwarted or distorted as a result of hierarchical divisions; - The full human resources of the bureaucracy are not utilised because of mistrust, fear of reprisals, and so forth; - The bureaucratic designs cannot assimilate the influx of new technology or scientists entering the organisation; - The bureaucratic designs modify individual personality in such a way that the person in a bureaucracy becomes the dull, grey, conditioned 'organisation man'. During many years of using the more traditional bureaucratic models in companies, some modifications to the ideal model have been developed and observed. The most distinctive characteristics are centralisation versus decentralisation and tall versus flat structures. There are three ways in which a structure can be decentralised: geographically; functionally; and by decision making: - The main criterion for geographical decentralisation is the geographical location of the company's operations. In today's global world every international company has some level of geographical decentralisation, as it has subsidiaries in many different countries. The more the company wants to expand, the greater this type of decentralisation will be. - Functional decentralisation concerns the degree to which the functional parts of a company's operations are centralised or decentralised. The company can be split into functional departments (e.g. IT, finance, marketing, human resources, etc.), but parts of the same department can be based in the same location and centralised, or not. The company can have one IT division controlling all IT in the whole company based in one place, or every subsidiary can have its own smaller IT department. - Decentralisation by decision making refers to whether the company has centres where decisions are made. The level of this type of decentralisation tends to be bound to the level of geographical decentralisation. In companies where each subsidiary has a manager with full responsibility over it, the top managers in company headquarters determine only whole-company policy and rules, not specific tasks for those managers. It is generally considered that decentralisation is better than centralisation. Decentralised structures increase people's autonomy, and with more autonomy comes more intellectual development and the possibility of self-realisation, bringing with it more satisfaction. Tall and flat structures The terms 'flat' and 'tall' concern the span of control in the company. 'In organisational analysis, the terms flat and tall are used to describe the total pattern of spans of control and levels of management. Whereas the classical principle of span of control is concerned with the number of subordinates one superior can effectively manage, the concept of flat and tall is more concerned with the vertical structural arrangement for the entire organisation.'
Whereas the traditional bureaucratic structure is very tall, the modern view of organisational theory tends to prefer flat structures. In fact both have their advantages and disadvantages. Tall structures offer better control for managers at lower levels: managers are responsible for fewer people, which makes it possible to maintain stronger relations with them. The flat structure responds better to commands coming from the top, because the route is shorter (fewer levels) and there is less potential for information becoming biased on its way. Flat structures also better allow individual initiative and self-control. Criticisms of Weber's traditional approach to organisation theory include the argument that 'Weber really did not intend for it to be an ideal type of structure. Instead, he was merely using bureaucracy as an example of the structural form taken by the political strategy of rational-legal domination.' As the market has progressed and evolved during the last century, a new phenomenon has arisen – competitiveness. With more companies in the market producing and offering substitute (or exactly the same) products, companies needed to start redesigning their structures to become more effective. Four organisational theories evolved, which together constitute modern organisation theory. - The first is to think about the company as a system of interacting parts. This concept is called the open system, and means that the company interacts with its outside environment, receiving information from and sending information to the outside, and acting on this information. - The second suggests that there is no one 'perfect' structure for an organisation; it depends on the core business of the company and on cultural aspects. - The third approach is 'ecological'. In this approach the company is compared to nature, where natural selection occurs: only the sturdiest and toughest survive, and in order to survive the internal structure of the company has to evolve to achieve success. - The fourth is organisational learning: '...the learning organisation is based largely on system theory but emphasises the importance of generative over adaptive learning in fast changing environments'. Information processing view The information processing view focuses on the company as a system which receives, gathers, processes and produces information. Because there are many other similar systems outside interacting together (other companies and the environment), the organisation must deal with some degree of uncertainty. Uncertainty is defined by Jay Galbraith as 'the difference between the amount of information required to perform the task and the amount of information already possessed by the organisation'. Companies need to respond to change coming from outside and adapt to survive. Tushman and Nadler suggest that 'Given the various sources of uncertainty, a basic function of the organisation's structure is to create the most appropriate configuration of work units (as well as the linkages between these units) to facilitate the effective collection, processing, and distribution of information.'
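Galbraith's definition lends itself to a compact restatement; the notation below is ours, introduced for illustration rather than taken from Galbraith:

$$U = I_{\text{required}} - I_{\text{possessed}}$$

where $U$ is task uncertainty, $I_{\text{required}}$ is the information needed to perform the task, and $I_{\text{possessed}}$ is the information the organisation already holds. When $U > 0$, the organisation must either reduce the information it needs or increase its capacity to gather and process information — the trade-off that the propositions below address.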
Tushman and Nadler formulate the following propositions about an information processing theory: - Different organisational structures have different capacities for effective information processing; - The tasks of organisational sub-units vary in their degree of uncertainty; - If organisations (or sub-units) face different conditions over time, the more effective units will adapt their structures to meet these changes in information processing requirements; - As work-related uncertainty increases, so does the need for an increased amount of information, and thus the need for increased information processing capacity; - An organisation will be more effective when there is a match between the information processing requirements facing the organisation and the information processing capacity of the organisation's structure. Because of the increasing competitiveness of companies in the market, managers started to look for new, better organisational structures. In the last fifteen years of the 20th century some widely recognised organisational structures were developed and successfully applied. Project structure The modern trend in business is to supply services rather than just goods. The specialisation of companies means that they are focused on their core business, and for support services they need complementary services from appropriate partners. A complete service (one-stop shop) includes a large number of activities for the supplier, and could be treated as project management: the company needs to manage the whole business cycle, from communicating with the customer and specifying what they need, through making or developing a product, to delivering and maintaining it. Moreover, the company can have many simultaneous projects producing different types of outputs, in contrast with the classical company structure, where the company produces only centrally specified types of outputs. In this form of company it is not possible to centrally plan what each unit will do, because each project needs something different. To support this business strategy, the project design structure emerged. In such a structure, different projects use the same units (departments) of the company, but for different purposes. This is typical, although there are modifications of this typical project structure for specific purposes. The other requirement, if the company wants to adopt a project design structure, is a different style of management. Change happens all the time, everywhere, and this requires dynamic activity to successfully manage the whole structure. Managers must become reoriented towards the management of human resources rather than strict, functional rules. Good relations between each of the project groups are crucial for achieving effectiveness. The project structure is a concept of management, not only a form of structural organisation. Matrix structure In a matrix organisation, each project manager reports directly to the general manager (in large companies there may be more levels). Since each project represents a potential profit centre, the power and authority used by the project manager comes directly from the general manager. The project manager has total responsibility and accountability for the success of the project. The functional departments (such as R&D, etc.) have functional responsibility to maintain technical excellence on the project.
Each functional unit is headed by a functional manager whose prime responsibility is to ensure that a unified technical base is maintained and that all available information can be exchanged for each project. The main difference in the matrix structure is that the same unit is used by many projects; it is up to the project managers to decide who will do what, and when. Observation of companies utilising matrix structures shows that '[...] because of the amount of interaction among members in matrix structures, and the high levels of responsibility they possess, matrix organisations usually have greater worker job satisfaction'. Horizontal organisation Currently the 'customer-driven' approach is recognised as the 'correct way' for companies to evolve. Horizontal organisations consist of teams which are organised around business processes rather than functional departmentalism. The teams are responsible for the results they generate; they are measured, and people are rewarded, according to team results, not individual performance. This approach leads to a better focus on the task rather than on individual specialisations. For a horizontal structure to be successful, all employees need to be fully informed and trained. Communication between and inside the teams is crucial. People should be provided with full data, not just some parts, and, in conjunction with this information, the ability to interpret it to produce better decisions. Also typical is direct contact between team members and suppliers or customers. This produces less bias in information than if it were passing through many layers of the organisation, and makes it possible to react quickly to customers' requirements or problems. Virtual organisation Virtual organisations have emerged as a response to environmental change, which demands quick, cheap and quality solutions. One definition suggests: 'A virtual organisation or company is one whose members are geographically apart, usually working by computer e-mail and groupware while appearing to others to be a single, unified organisation with a real physical location.' Another, more target-oriented definition says: 'The virtual organisation is a temporary network of companies that come together quickly to exploit fast-changing opportunities.' In other words, it could be described as a large alliance of companies, connected together by modern information technology, with different backgrounds but focusing on the same goal. This collaboration gives companies a competitive advantage in the market which they would not be able to achieve alone. The key attributes of virtual organisations can be summarised as: - Technology: Informational networks help far-flung companies and entrepreneurs to link up and work together. - Opportunism: Partnerships will be less permanent, less formal, and more opportunistic. Companies will band together to meet specific market opportunities and, more often than not, fall apart once the need evaporates; - No borders: This new organisational model redefines the traditional boundaries of the company. More cooperation amongst competitors, suppliers, and customers makes it harder to determine where one company ends and another begins; - Trust: These relationships make companies far more reliant on each other and require far more trust than ever before. They share a sense of 'co-destiny', meaning that the fate of each partner is dependent on the others; - Excellence: Because each partner brings its core competence to the effort, it may be possible to create a 'best-of-everything' organisation.
Every function and process could be world-class – something that no single company is likely to achieve alone. Business clusters Even though a business cluster is not an organisational structure in the true sense, it still belongs to organisational theory. A business cluster 'consists of several enterprises that have entered into a formal, continuing association in order to pursue some activities in common and derive maximum benefit from such synergy.' This closely resembles the definition of a virtual organisation. There is a tendency for companies of a similar kind in a specific region to do business in close cooperation. These associations bring their members a number of advantages: specialisation, lower costs per unit, better access to raw materials, production savings, mutually useful cooperation with institutions (universities, research institutes, consultant companies, etc.) and, very importantly, the support of local governments. A cluster typically comprises the following elements: - Core businesses: The businesses that are the lead participants in the cluster, often earning most of their income from customers who are beyond the cluster's boundary; - Support businesses: The businesses that directly and indirectly support the businesses at the core of the cluster. These may include suppliers of specialised machinery, components and raw materials, and service firms covering finance / venture capital, lawyers, design, marketing and PR. Often these firms are highly specialised and are physically located close to the core businesses; - Soft support infrastructure: In a high-performance cluster, the businesses at the core and the support businesses do not work in isolation. Successful clusters have community-wide involvement. Local schools, universities, polytechnics, local trade and professional associations, economic development agencies and others support their activities and are key ingredients in a high-performance cluster. The quality of this soft infrastructure, and the extent of teamwork within it, is a very important key to the development of any cluster; - Hard support infrastructure: The supporting physical infrastructure: roads, ports, waste treatment, communication links, etc. The quality of this infrastructure needs to at least match that of competitive destinations, be they local or further afield. To sustain competitive pressure from the many companies in the market, firms need to systematically gather information about their rivals and make decisions based on that information. Gathering information is sometimes a problem for small and medium-sized enterprises (SMEs), not only for financial or personnel reasons, but also because SMEs may not know what they want to find out, or may not have enough time for detailed analysis. It is therefore logical that, in this kind of situation, companies can group together to form a cluster and distribute tasks across members. It then also becomes possible to support a system, fully or partially automated, for finding, sorting and analysing information in a particular area. See also: Types of construction organisations. The text in this article is based on a section from 'Business Management in Construction Enterprise' by David Eaton and Roman Kotapski. The original manual was published in 2008. It was developed within the scope of the LdV program, project number: 2009-1-PL1-LEO05-05016, entitled 'Common Learning Outcomes for European Managers in Construction'. Related articles on Designing Buildings Wiki - Business administration. - Business model. - Business process outsourcing (BPO).
- Contingency theory. - Environmental scanning. - Limited company. - Limited liability partnership. - Joint venture. - Office manual. - Partnering and joint ventures. - Personal service company. - Special purpose vehicles. - Succession planning. - Types of construction organisations.
External references
- Bennis, W.: Beyond Bureaucracy. Trans-Action, July-August 1965.
- Business Week: The Virtual Corporation. Business Week, February (8): 98-102, 1993.
- Byrne, J. A.: The Horizontal Corporation. Business Week, December (20): 78-79, 1993.
- Christie, P., Lessem, R., et al.: African Management: Philosophies, Concepts, and Applications. Randburg, Knowledge Resources, 1993.
- Galbraith, J. R.: Designing Complex Organizations. Reading, Mass., Addison-Wesley Pub. Co., 1973.
- Luthans, F.: Organizational Behavior. Boston, Mass., Irwin/McGraw-Hill, 1998.
- Riggio, R. E.: Introduction to Industrial/Organizational Psychology. Glenview, Ill., Scott, Foresman/Little, Brown Higher Education, 1990.
- Simon, H. A.: The New Science of Management Decision. New York, Harper, 1960.
- Tushman, M., Nadler, D.: Information Processing as an Integrating Concept in Organization Design. Academy of Management Review, July: 614-615, 1978.
- Vernon, P.: The Language of Business Intelligence. 2004. Retrieved 16 May 2007.
- Weiss, R. M.: Weber on Bureaucracy: Management Consultant or Political Theorist? Academy of Management Review, April: 242-248, 1983.
- Whatis.com: IT Dictionary – Virtual Organization. Retrieved 16 May 2007 from http://whatis.techtarget.com/definition/0,,sid9_gci213301,00.html, 2007.
- Wikipedia CZ: Marketing. http://cs.wikipedia.org/wiki/Marketing.
In Spanish, we can use the verb acordar with or without a reflexive pronoun. It has a different meaning depending on how it is used. Let's see some examples: - Acordar = to agree [to do something] / to come to an agreement When we use the verb acordar without a reflexive pronoun, it expresses an agreement with someone, i.e., coming to a conclusion after some discussion. Los socios acordaron invertir más en tecnología para la empresa. The associates agreed to invest more in technology for the company. Ayer acordé con mi jefe que haría algunas horas extra este mes. Yesterday my boss and I agreed that I'd do some extra hours this month. [lit: I agreed with my boss...] It is generally followed by an infinitive, but it can also be used with the preposition "con" if the person the agreement is made with is mentioned. Note that this use of acordar for agreeing is slightly formal. In a more colloquial register we'd probably use quedar en algo. For example: Verónica y yo acordamos vernos a las tres y media. (more formal) Verónica y yo quedamos en vernos a las tres y media. (more colloquial) (Verónica and I agreed to meet at half past three.) - Acordarse de [algo] = to remember [something] When acordar is used with a reflexive pronoun, it means "to remember [something]". -Ayer estuve pensando en cómo nos conocimos. -Pues, yo no me acuerdo... -Yesterday, I was thinking about how we met. -Emm, I don't remember... Anoche no me acordé de decirte que Luisa te llamó dos veces. Last night I forgot to tell you that Luisa called twice. [lit: didn't remember to tell you...] Por suerte, se acordó de traer los documentos. Luckily, he remembered to bring the documents. Esta vez me he acordado de nuestro aniversario. This time I remembered our anniversary. If what the person remembers is explicit, the preposition "de" is required. Whatever is remembered can be expressed with a noun, an infinitive, or a subordinate clause (after de). Do not omit "de" in these cases. This would be incorrect: "Esta vez me he acordado nuestro aniversario." See also: Conjugate recordar, acordar and acordarse (o > ue stem-changing -ar verbs) in El Presente (present tense).
Functions of a team leader Featured in The Team Working Activity Pack training manual By Rod Storey Category: Team Building Credit price: 2 download credits (single user) Leadership of a team is not just a matter of standing up and giving instructions; it is much more than that. It involves not only concern for the task in hand but also future plans, keeping the team together as a cohesive working unit, and being aware of and attending to the needs of individual team members in terms of their development, their need to discuss things, and so on. All team members, as well as current and potential team leaders, need to have a clear picture of what is expected of them in the team-leader role. This training activity helps the leader to clarify their role by drawing on the thoughts of the team members, and it helps the members to realise what they can and should expect from the leader. Working in teams, the participants consider the roles of an effective team leader, drawing on their own past experience of good leaders. Each team writes their ideas on a flipchart. All the participants then compare the ideas produced in the teams with those produced by the trainer. There is a final discussion to relate the functions of a team leader produced in the exercise to the participants' own workplace. Who is it for: This training resource is intended for use by trainers with participants as a syndicate exercise on the key roles of a leader in relation to the team. Min group size: 4. Max group size: 8. No. of pages: 7. Purpose: This training resource is intended for use by trainers with team leaders or potential future team leaders. It can also help team members to understand what their role will be when they lead an exercise, and prepare them for this. Download the training activity, Functions of a Team Leader, as featured in the Fenman training manual, The Team Working Activity Pack.
What is a layoff? A layoff is the termination of an employee's employment, initiated by the employer. The former employee no longer performs work-related services or collects wages. In some circumstances a layoff is only a temporary suspension of employment; at other times it is permanent. Layoffs are generally the result of economic downturns: a firm may choose to reduce the size of its workforce to cut costs until conditions improve. Unlike termination for misconduct, a layoff has fewer negative consequences for the worker. The employee remains eligible for rehire and often has positive work experience and references that are useful during a job search. The former employee may also be eligible for unemployment benefits, re-training, and other forms of support. A layoff is usually considered a separation from employment due to a lack of available work. The term "layoff" is mostly a description of a type of termination in which the employee holds no blame. An employer may have reason to believe, or hope, that it will be able to recall employees back to work from a layoff (such as a restaurant during the pandemic), and for that reason may call the layoff "temporary," although it may end up being a permanent situation. The term layoff is often incorrectly used when an employer terminates employment with no intention of rehire, which is actually a reduction in force, as described below. When an Employee Is Laid Off When an employee is laid off, it typically has nothing to do with the employee's personal performance. Layoffs occur when a company undergoes restructuring or downsizing, or goes out of business. Costs of Layoffs to Employers Layoffs are more costly than many organizations realize (Cascio & Boudreau, 2011). In tracking the performance of companies that downsized versus those that did not, Cascio (2009) found that, "As a group, the downsizers never outperform the nondownsizers. Companies that simply reduce head counts, without making other changes, rarely achieve the long-term success they desire" (p. 1). Direct costs of laying off highly paid technology employees in Europe, Japan, and the U.S. were about $100,000 per layoff (Cascio, 2009, p. 12). Companies lay off staff expecting to gain financial benefits by cutting costs (by not having to pay employee salaries and benefits). However, "most of the anticipated benefits of employment downsizing do not materialize" (Cascio, 2009, p. 2). While it is true that downsized companies have a smaller payroll, Cascio (2009) contends that they may also lose business (from a reduced salesforce), create fewer new products (because they have fewer research & development staff), and suffer lowered productivity (when high-performing workers leave because of lost or reduced morale).
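To see how quickly those effects can erase the payroll savings, here is a rough, hypothetical break-even sketch. Only the ~$100,000 direct-cost figure comes from the Cascio estimate quoted above; the salary and indirect-loss numbers are assumptions for illustration.

```python
# Illustrative break-even sketch: direct layoff costs plus indirect
# losses can eat the payroll savings in the first year.
# direct_cost is from Cascio (2009) as quoted above; the other
# figures are assumptions invented for this example.

salary_saved = 120_000   # $/yr per laid-off employee, assumed
direct_cost = 100_000    # $ one-time, per Cascio's estimate
indirect_loss = 40_000   # $/yr assumed: lost sales, morale, rework

net_first_year = salary_saved - direct_cost - indirect_loss
print(f"net first-year benefit per layoff: ${net_first_year:,}")  # -$20,000
```

Under these assumed numbers the first year of a layoff is a net loss, which is consistent with Cascio's finding that the anticipated benefits often fail to materialize.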
Microsilica is a by-product of silicon metal production: a densified amorphous powder obtained during the metallurgical reduction of silicon in an electric arc furnace. It is used as an additive to improve the properties of concrete and mortar formulations. Silica particles are about 30 to 100 times smaller than cement grains and consist almost entirely of amorphous silicon dioxide (SiO2). They fill part of the pore space (the micro-filler effect) and increase strength through a pozzolanic secondary reaction. In addition, corrosion protection for the reinforcement and resistance against water penetration can be improved. Microsilica is also used in refractories, in granulated fertilizers (as a coating) and in the production of plastics and adhesives. PCC BakkiSilicon hf. has been designed as one of the most climate-friendly and environmentally compatible silicon metal plants in the world. The use of Iceland's geothermal resources in silicon metal production cuts greenhouse gas emissions by around two-thirds compared with other plants around the world. With silicon metal production at the PCC plant powered entirely by renewable energy, the overall CO2 footprint of the whole process is drastically reduced. The silica dust generated alongside metallic silicon and ferrosilicon alloys in arc furnaces consists of extremely fine particles, on average about 100 times smaller than an average cement grain. For handling reasons, silica dust is delivered in concentrated form as micro-granules, which are agglomerates of individual particles. Microsilica is a very important addition to watertight concrete. Replacing 15% of the cement with microsilica increases the impermeability of the concrete several dozen times over, which is difficult to achieve by other methods. In addition, compressive strength is increased by about 20% and water absorption is reduced threefold. However, the addition of microsilica reduces workability, making the fresh concrete mix difficult to place. This can be counteracted by adding a superplasticizer. It is worth noting, however, that a superplasticizer cannot be used uncritically, especially in microsilica concretes, due to its potential negative impact on strength.
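As a quick worked example of the 15% replacement rate mentioned above — the base cement content here is an assumed, typical figure, not one taken from the text:

```python
# Rough sketch of the 15% cement replacement described above, for a
# hypothetical mix with 350 kg of cement per cubic metre of concrete.
# The 15% ratio comes from the text; the base cement content is an
# assumption for illustration only.

cement_base = 350.0        # kg/m^3, assumed ordinary mix
replacement_ratio = 0.15   # from the text: replace 15% of cement

microsilica = cement_base * replacement_ratio
cement_remaining = cement_base - microsilica

print(f"microsilica:      {microsilica:.1f} kg/m^3")      # 52.5
print(f"cement remaining: {cement_remaining:.1f} kg/m^3")  # 297.5
```

In practice a superplasticizer dosage would be chosen alongside these quantities to restore workability, as the passage above notes.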
Thirty-four acres in Bass River State Forest, New Jersey, were reforested with 23,000 mixed oak and shortleaf pine seedlings in April 1993. The site had been harvested in 1989 and was first replanted in the spring of 1991; however, a late killing frost followed by drought that year meant that only 14 percent of the replanted trees survived. Through this planting by the New Jersey Bureau of Forest Management, the damaged area will be restored. This project was supported by our corporate partner, the Alcoa Foundation.
When we think of risky business, we tend to think of financial trades, or venture capital investments, or maybe Tom Cruise. In other words, we think of pretty high stakes. But risk also exists at a much lower, everyday level around the office. We send that personal email from our work address. We pitch that bold project to the boss. We join that conference call while driving. For every big bet that makes or breaks a fortune, countless tiny ones creep through the workplace. Understanding how people arrive at these common gambles could help us improve our daily decision-making. In an effort to gain such insight, a group of researchers led by Sarah M. Helfinstein of the University of Texas at Austin recently tried to identify where these decisions emerge in the brain. That effort involved scanning the brains of 108 test participants playing a game called the Balloon Analog Risk Task. In the game, players choose whether to give a virtual balloon one more pump and risk it popping, or to stop and cash out for points. Simple as it sounds, the balloon game is a pretty reliable reflection of the types of everyday risks we take both at work and at home. Players don't know when the balloon will pop—just as people don't know when their boss will catch them browsing the web one too many times, or which heroin dose will be their last. In fact, performance on the balloon test has been linked with real-world risk-taking behaviors such as smoking, drug use, and unsafe sex. Helfinstein and colleagues analyzed the brain activity that occurred right before players made a risky choice (pumping the balloon) or a safe choice (cashing out). So if a player pumped on a fourth choice in one game, and cashed out on the fourth choice in another game, the researchers looked at the third choice from each trial. This design helped them compare situations where the amount of risk was the same but the subsequent decision differed. "We were making sure the only thing that differed between these two trials was what was going to happen a few seconds into the future," Helfinstein tells Co.Design—"whatever sort of cognitive processing was going on right before they made that different choice." Now the researchers knew what the brain looked like before taking a risk and what it looked like before playing things safe. Using those neural portraits, they developed an algorithm to predict what choice balloon players would make next. When they put that formula to the test, it guessed a player's behavior correctly about three quarters of the time, the researchers report in Proceedings of the National Academy of Sciences. There's no need to worry just yet that your boss will start predicting your behavior, Minority Report-style. (There's that Cruise guy again.) It may be possible to wheel an fMRI scanner into your cubicle and guess whether or not you're about to take a bad risk, but it's not exactly practical. What's useful about these findings is where the researchers found most risk-related activity in the brain: the control networks. (To be precise, the anterior cingulate cortex, the bilateral insula, and the parietal cortices were most predictive in the algorithm.) So players didn't take risks for the thrill; if that had been the case, reward centers of the brain would have been most active. Instead, they took risks when they couldn't control themselves enough to play it safe. Here's why that matters.
If employers know that self-control is the key cognitive factor in risky behavior, they can find ways to shape the work environment to strengthen it. To give one hypothetical example: scheduling a big decision right before lunch might not be a good idea if control proves weakest when we're hungry. More broadly, says Helfinstein, "We can start to see what different elements out there are most important for helping to make better decisions." Of course, a certain amount of risky business isn't necessarily a bad thing. There's good risk and bad risk, and the same appetite that produces a decision leading to financial meltdown might well produce the next big innovation. On a smaller scale, some of our quotidian gambles might make office life more tolerable. So we don't want to design all risk out of the workplace—just the risks that don't help the work.
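For readers who want to experiment with the task itself, here is a toy Python simulation of a BART-style balloon game. The uniform pop distribution, the payoff rule, and the fixed pumping strategies are our own simplifying assumptions, not the parameters of the published task.

```python
# Toy simulation of a Balloon Analog Risk Task-style game.
# Assumptions for illustration: the pop point is uniform on 1..128,
# each successful pump banks one point, and popping forfeits the balloon.
import random

def play_balloon(pump_strategy: int, max_pumps: int = 128) -> int:
    """Pump up to `pump_strategy` times; return points banked (0 if popped)."""
    pop_at = random.randint(1, max_pumps)  # unknown to the player
    pumps = 0
    while pumps < pump_strategy:
        pumps += 1
        if pumps >= pop_at:
            return 0          # balloon popped: the risky streak lost everything
    return pumps              # cashed out: one point per successful pump

# Compare a cautious and a greedy fixed strategy over many balloons.
random.seed(1)
for strategy in (16, 64):
    scores = [play_balloon(strategy) for _ in range(10_000)]
    print(f"pump {strategy:3d} times: mean payout {sum(scores) / len(scores):.1f}")
```

Under the uniform assumption the expected payout of pumping s times is s(128 − s)/128, so a moderately aggressive strategy outscores a timid one on average — which is part of why the task discriminates between risk styles rather than simply punishing risk.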
Even though the future of financial incentives for clean energy technologies is uncertain at best, interest in technologies that integrate electricity generation right into the buildings we occupy continues to grow. And that doesn't just mean solar. I just got through reading about a building-integrated wind power system running on the roof of the Oklahoma Medical Research Foundation, using technology from Venger Wind. Billed as the largest building-integrated wind system in the United States, the installation uses 18 vertical-axis wind turbines that are about 18.5 feet tall and start generating power at wind speeds of about 8.9 miles per hour. Each turbine has a generating capacity of 4.5 kilowatts, which isn't that much, honestly, but it IS enough to keep the operations of the new research tower running without power from the grid. Venger is pushing for more small wind systems like this one, which help reduce companies' dependence on expensive electricity that is usually produced by more carbon-intensive sources such as coal-fired power plants. "The potential to provide wind energy at the point of use, within urban environments, is a major paradigm shift from the typical large wind scenarios where multi-megawatt systems are forced to be installed farther and farther away from the populations where the energy is needed most," said Ken Morgan, chairman and chief marketing officer for Venger Wind. Shining Interest On Building-Integrated Solar The push toward net-zero buildings is also increasing interest in building-integrated solar photovoltaic (BIPV) technology, which is now expected to drive more than $2.4 billion in revenue by 2017, according to a report by Pike Research. BIPV technologies will account for about 4.6 gigawatts (GW) of new capacity by that time, the research firm predicts. For a hint as to the future of BIPV, consider a research project at the University of California at Los Angeles, which is focused on developing a transparent film that can be attached to existing glass or other surfaces. You might be able to use this substance on a car sunroof, on the back of your consumer electronics gadgets or on high-rise buildings to generate power, reports Bloomberg. The Pike report lists a few companies in particular to watch. They include: - Dow Solar, which makes copper indium gallium selenide (CIGS) solar tiles - PowerFilm, which is working with French textile company Serge Ferrari on new silicon-based architectural fabrics - Pythagoras Solar, which is developing innovative solar PV glass units - DyeTec Solar, a partnership between glass producer Pilkington North America and Dyesol, an Australian dye-sensitized cell (DSC) materials supplier - Heliatek, which makes small-molecule-based organic PV modules - Solantro Semiconductor, which is developing integrated nano-inverter BIPV technology to boost energy harvest - Tata Steel, which is developing DSC-coated steel roofing
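Returning to the Oklahoma rooftop array described above, here is a quick sanity check on the numbers. The 15% capacity factor is an assumption for illustration, since the article does not report actual output and small urban wind typically runs well below utility-scale capacity factors.

```python
# Back-of-the-envelope check on the rooftop wind array described above:
# 18 vertical-axis turbines rated at 4.5 kW each.
# The capacity factor is an assumed figure, not reported data.

turbines = 18
rated_kw = 4.5
capacity_factor = 0.15   # assumed, for illustration
hours_per_year = 8760

peak_kw = turbines * rated_kw                        # 81 kW nameplate
annual_kwh = peak_kw * capacity_factor * hours_per_year

print(f"nameplate capacity: {peak_kw:.0f} kW")
print(f"rough annual yield: {annual_kwh:,.0f} kWh")  # ~106,000 kWh
```

Even under these generous assumptions the array supplies a building-scale load rather than a grid-scale one, which is consistent with the article's framing of small wind as point-of-use generation.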
“You can’t manage what you don’t measure.” At the end of the day: 1) How many tasks were completed? 2) How much of your discretionary or free time was well spent? The Pareto Principle, or 80/20 rule, says that 80% of the output or results will come from 20% of the input or action. This principle, named for the Italian economist Vilfredo Pareto, illustrates the importance of using the small amount of discretionary time available to maximize results. The majority of our time goes toward maintenance, work duties, and repeating tasks. That’s why it’s so important to use the small percentage of remaining time effectively. 3) How many of the tasks accomplished were in the Important/Not Urgent quadrant of Stephen R. Covey’s time management matrix (also known as the Eisenhower Matrix)? Measure productivity with daily and weekly scorecards to track 1) actions completed, 2) time spent on important Quadrant II tasks, and 3) productive use of discretionary time. Followers of “Getting Things Done” (aka GTD) or any other productivity system will enjoy this fun and easy way to track productivity. A method that could be used for adding notes is the slash/dot system proposed by Patrick Rhone. The goal of productivity is to DECREASE the amount of time spent on maintenance and repeating tasks, and INCREASE the amount of free time available to use in the way you choose. How? Evaluate, simplify, be more efficient, be more effective. Eliminate tasks that are not meaningful to you. Automate those that remain. Then add in the activities that ARE meaningful to you. There’s no point in increasing the amount of free time available unless you use it well. How? 1. Be very aware of how you WANT to use it. Clarify how you want to add value. 2. Be very aware of how you ARE using it. Focus on how you want to add value.
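A scorecard like the one described above is easy to keep in a spreadsheet, or in a few lines of Python. The sketch below is a minimal illustration; the field names and sample tasks are invented for the example.

```python
# Minimal daily scorecard along the lines described above: count tasks
# completed, flag which were Quadrant II (important / not urgent), and
# tally discretionary time well spent. Field names are our own invention.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    completed: bool
    quadrant2: bool          # important but not urgent
    discretionary_min: int   # free-time minutes this task used well

def daily_score(tasks: list[Task]) -> dict:
    done = [t for t in tasks if t.completed]
    return {
        "tasks_completed": len(done),
        "quadrant2_completed": sum(t.quadrant2 for t in done),
        "discretionary_minutes_well_spent": sum(t.discretionary_min for t in done),
    }

today = [
    Task("write project outline", True, True, 45),
    Task("answer routine email", True, False, 0),
    Task("read industry paper", False, True, 0),
]
print(daily_score(today))
```

Comparing these daily totals week over week gives exactly the three measures the scorecard approach calls for: completions, Quadrant II share, and well-spent discretionary time.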
Boatmaster Training Terminology and Definitions “Boatmaster” means the person in command of an inland waterways vessel. “Passenger vessel” means a vessel carrying more than 12 passengers. “Small passenger vessel” means a vessel that carries not more than 12 passengers and does not go to sea. “Passenger” means ANY person carried on a vessel (fare-paying or not) except: (a) a person employed or engaged in any capacity on board the vessel on the business of the vessel; (b) a person on board the vessel either in pursuance of the obligation laid upon the master to carry shipwrecked, distressed or other persons, or by reason of any circumstance that neither the master nor the owner nor the charterer (if any) could have prevented or forestalled; and (c) a child under one year of age. “Person” means all people on board the vessel irrespective of age. “Crew” means a person employed or engaged in any capacity on board a vessel on the business of the vessel. “Inland waterways” means Categorised Waters A, B, C and D, as defined and listed in Merchant Shipping Notice (MSN) 1776, as amended, and any non-categorised inland waters. Merchant Shipping Notice MSN 1776 (M) Categorisation of Waters The four categories of waters are as follows: Category A: Narrow rivers and canals where the depth of water is generally less than 1.5 metres. Category B: Wider rivers and canals where the depth of water is generally 1.5 metres or more and where the significant wave height could not be expected to exceed 0.6 metres at any time. Category C: Tidal rivers and estuaries and large, deep lakes and lochs where the significant wave height could not be expected to exceed 1.2 metres at any time. Category D: Tidal rivers and estuaries where the significant wave height could not be expected to exceed 2.0 metres at any time.
The curtain of secrecy is being raised by Blue Origin, a private entrepreneurial space group designing both suborbital and orbital vehicles. Backed by Amazon.com mogul Jeff Bezos, the Kent, Wash.-based Blue Origin group has completed wind tunnel testing of its next-generation craft, simply called the "Space Vehicle." It would transport up to seven astronauts to low-Earth orbit and the International Space Station. Though the company has been stingy with public information in the past, new details of the recent work have been released. Blue Origin's spacecraft sports a biconic shape, with its design refined by more than 180 wind tunnel tests and extensive computational fluid dynamics analysis. To help validate the spacecraft's shape and body flap configuration, tests were recently carried out over several weeks at Lockheed Martin's High Speed Wind Tunnel Facility in Dallas. The testing was conducted as part of Blue Origin's partnership with NASA, under the agency's Commercial Crew Development (CCDev) program, which awarded the company $22 million in 2011 to develop the vehicle. [Photos: Blue Origin's Secretive Spaceship] "Our Space Vehicle's innovative biconic shape provides greater cross-range and interior volume than traditional capsules without the weight penalty of winged spacecraft," said Rob Meyerson, president and program manager of Blue Origin. "This is just one of the vehicle's many features that enhance the safety and affordability of human spaceflight, a goal we share with NASA," Meyerson said in a statement. Test stand testing Also under CCDev, Blue Origin is ready to start testing the thrust chamber assembly — the combustion chamber and nozzle — of its BE-3, a 100,000-pound-thrust rocket engine fueled by liquid oxygen and liquid hydrogen. The BE-3 will be used on Blue Origin's reusable launch vehicle. "It's on the E-1 test stand now," at NASA's Stennis Space Center in Mississippi, "and we're close to conducting the first firings," said Brett Alexander, director of business development and strategy for Blue Origin, who is based in Washington, D.C. Rocket motor testing at Stennis is scheduled to start in May, Alexander told SPACE.com. Also, the company's "pusher" launch abort system is headed for testing later this summer. Those tests will demonstrate the ability to control the flight path of a subscale crew capsule using a thrust vector control system. [Blue Origin's Secretive Space Vehicle Explained (Infographic)] "The pusher escape system for our suborbital system [called New Shepard] means you can get the capsule and the people away at any time, for any reason," Alexander said. Blue Origin is a private company developing vehicles and technologies to enable commercial human space transportation. Founded in 2000, the company says it has a long-term vision of greatly increasing the number of people who fly into space through low-cost, highly reliable commercial space transportation. But why so tight-lipped about its enterprising work? "There are really two reasons," Alexander said. "One is we like to talk about things we've done — not things we're planning to do. So it's more about accomplishments. After all, the space business is hard. Things always take longer than you'd expect. I think that's true for newer space companies, as well as established space companies." Another reason, Alexander continued, is that "we don't want to get off-focus. We're a very intense engineering, technical company.
We don't have a lot of accountants for contracts…and the more time we spend talking about things, there's less time we spend doing things." Embracing the private sector Regarding what happens with NASA's CCDev program in the future, Alexander said Blue Origin intends to go forward with or without the space agency. "The work we've done with their commercial crew office has helped us to accelerate plans that we had…but we're not just doing it for NASA," he said. "If we don't continue on the commercial crew program, it's not like we're going to stop the work. We're going to continue the effort." NASA embracing the commercial sector so that private firms can move into low-Earth orbit is the right approach, Alexander said, a step that frees the space agency to push farther into deep space. "The burden now is less on NASA and more on the private sector to deliver," Alexander added. Blue Origin makes use of its own spaceport located about 25 miles north of Van Horn, Texas. Over the years, test flights of Blue Origin hardware from the spaceport have seen both success and at least one publicly announced crash in 2011. "We always expected losing a test vehicle at some point," Alexander said. "We'd like more tests than fewer tests. But in the end, it is rocket science. It's hard and you expect that." Blue Origin's New Shepard system is being developed to provide frequent opportunities for researchers to fly experiments into suborbital space. Research experiments can take sensor readings of space, the sky and the Earth, and will experience microgravity environments for three or more minutes. Alexander said that he thought the suborbital market is real, but the question is how large it is going to be. "If spaceflight were ubiquitous there would be tons of uses for it…tons of science being done," Alexander said. Developing the capability to be responsive, cost-effective and to fit into the business cycles of research firms is essential, he said. "As long as we are at least focused on that…we've got a good shot at doing it," Alexander said. "I think those markets are real. The question is, are they enough to sustain a business on their own…or are they going to be a side activity for human spaceflight, tourism, adventure experiences?" Leonard David has been reporting on the space industry for more than five decades. He is a winner of last year's National Space Club Press Award and a past editor-in-chief of the National Space Society's Ad Astra and Space World magazines. He has written for SPACE.com since 1999.
Improvements Continue To Drive GPR Applications Issues resulting from accidental damage to underground utilities continue to receive priority in the underground construction industry as initiatives to protect buried facilities gain momentum. Data from the Common Ground Alliance indicate that the primary cause of utility strikes is failure to accurately locate and mark buried pipe and cable. Electronic locators remain the basic tool for finding underground facilities, but they have limitations. Hand-held electronic receivers detect buried facilities via a signal: generally, plastic pipes cannot be found unless they are equipped with tracer wire to carry a signal generated by the locator's separate transmitter unit, and communications cable must be energized for the locator to detect it. Signals from nearby power sources can also interfere with the locator's accuracy. Ground-penetrating radar (GPR) offers an alternative. GPR locators send a radar pulse into the soil; the pulse bounces off buried objects and returns to the unit's receiver, providing vertical and horizontal positions along with a display of the object on the unit's screen. No tracer wire is necessary. Most models are on wheels attached to handlebars, similar to a lawn mower, and the operator pushes the device across the ground to locate and view a representation of what's below. However, GPR also has limits, the most important being that its performance is much less effective in dense, conductive soils. Representatives of key suppliers of GPR locators, and of one firm that offers geophysical services using proprietary GPR locating equipment, discussed GPR locating technology with Underground Construction. Mala USA, Vincent Ferrara, Senior Account Manager: The primary advantage of GPR is that it detects not only metal pipes but utilities of all materials. No tracer wire or direct connection to the utility is needed. Sometimes GPR is the only solution for finding nonmetallic utilities without a tracer wire in dry, sandy soil. As the operator pushes the GPR system across the surface, the unit provides a readout of the subsurface. On the GPR screen, the operator sees a hyperbolic indication of the buried utility with accurate horizontal and vertical positioning. The closest and simplest analogy is an image similar to that of a fish finder. The latest GPR systems for utility locating are more compact, easier to use and more affordable than anything previously produced.
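The ranging principle behind that hyperbolic display can be summarised with the standard two-way travel-time relation of GPR theory (a textbook formula, not something quoted from the suppliers above):

$$d = \frac{v\,t}{2}, \qquad v \approx \frac{c}{\sqrt{\varepsilon_r}}$$

where $d$ is the depth to the target, $t$ is the two-way travel time of the reflected pulse, $c$ is the speed of light in a vacuum, and $\varepsilon_r$ is the relative permittivity of the soil. Wet, clay-rich soils have high permittivity and conductivity, which both slows the pulse and attenuates it quickly — the physical reason GPR performance drops off in dense, conductive ground while dry, sandy soil remains the best-case scenario.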