The Benefits of Shopping for Cheaper Utility Companies
Almost every property owner or renter would love to reduce their energy bills at the property they reside in. While there are many things people can do to cut costs with energy conservation, prices for utilities always seem too expensive with few real areas of savings. The good news is things have changed in the last few decades in certain areas of utilities. In fact, now in some states, residential customers have been able to find cheap electricity with no deposit because of changes in the utility marketplace.
Having electricity for a home or rental unit is a need for everyone. In years past, many areas of the country only gave customers one choice for their public utility electrical company. However, due to deregulation changes, consumers now have a greater variety of choices when they are seeking a source for their electrical company for residential and commercial use.
Understanding Utility Deregulation
Deregulation is a process by which the government makes companies known to have a monopoly on a specific business or utility change the way they do business. Deregulation forces conglomerates to share the industry with smaller or competitive companies. This is done to establish a more open, free-market approach, which usually benefits consumers significantly.
While most Americans are familiar with the deregulation of transportation and communications, fewer have been aware of similar changes and trends in the energy sector. The Energy Policy Act of 1992 allowed other companies to handle the distribution of wholesale electricity and natural gas to competitors. These second-tier companies then deliver it to their customers and manage their service. These changes allow consumers in states that have adopted them to shop around for an electricity or natural gas carrier with cheaper rates than the primary carrier in the state or region. While the primary electric company of a given area is still the source of the energy, the distribution and management of the electricity may now be handled by a host of different companies. This gives consumers a greater choice in getting their basic electric service managed at a significantly lower rate.
The Energy Policy Act of 1992 was a federal policy that opened the deregulation process in all fifty states. However, not all fifty states have enacted these changes or completed the process for these changes in utility service. Only the states that have completed the deregulation process have multiple companies offering electrical or natural gas supply. As of 2018, eighteen states have a deregulated electricity market, and another six states are in the process of deregulating electricity as well.
In addition, there are some states that offer deregulated public utility gas as well as electricity and others that offer only deregulated gas or electricity. There are an additional twelve states that have deregulated public utility gas services but not electric. Texas is one of the primary deregulated electric energy markets which affords excellent opportunities for electric customers to shop around for the best deal they can find. |
Our project, the “AUTOMATIC DOUBLE AXIS WELDING MACHINE”, begins with an introduction to welding various components automatically. A single pneumatic cylinder and a solenoid valve are provided. The cylinder handles the forward and backward movement; because the welding head moves along both the x and y axes, the machine is called a double axis welding machine.
The double axis welding machine makes use of properly shaped MS (mild steel) alloy electrodes to apply pneumatic pressure and carry electrical current through the workpieces. Heat is generated mainly at the merging point between the two sheets. This causes the material being welded to melt gradually, forming a molten pool known as the weld nugget. The molten pool is held in place by the pressure applied through the electrode tip and by the encircling solid metal. No welding rod is used in this type of welding process. Compressed air flows through the solenoid valve to the pneumatic cylinder; the welding holder is connected to the cylinder and is actuated by the solenoid valve, at which point the metal is welded automatically.
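To illustrate that actuation sequence, here is a hypothetical control sketch. The pin numbers, timings and the use of a Raspberry Pi with the RPi.GPIO library are assumptions for illustration only; the actual project may use a simple timer or relay circuit instead.

```python
# Hypothetical sketch of the solenoid/cylinder welding cycle described above.
# Pin numbers and timings are illustrative assumptions, not project values.
import time

import RPi.GPIO as GPIO

SOLENOID_PIN = 17  # energises the solenoid valve; compressed air extends the cylinder
WELD_PIN = 27      # closes the contactor that passes welding current to the electrodes

GPIO.setmode(GPIO.BCM)
GPIO.setup([SOLENOID_PIN, WELD_PIN], GPIO.OUT, initial=GPIO.LOW)

def weld_cycle(travel_s=1.0, weld_s=0.5):
    """One automated cycle: extend the cylinder, fire the weld, retract."""
    GPIO.output(SOLENOID_PIN, GPIO.HIGH)  # valve opens, cylinder moves forward
    time.sleep(travel_s)                  # wait for the holder to reach the workpiece
    GPIO.output(WELD_PIN, GPIO.HIGH)      # weld current flows at the contact point
    time.sleep(weld_s)
    GPIO.output(WELD_PIN, GPIO.LOW)
    GPIO.output(SOLENOID_PIN, GPIO.LOW)   # valve closes, return stroke retracts cylinder
    time.sleep(travel_s)

try:
    weld_cycle()
finally:
    GPIO.cleanup()
```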
* Small in size.
* Costs less than other welding machines.
* Its portable nature makes it easy to handle.
- Not so effective for very hard materials.
- Feed should be given intermittently.
- Overload should be avoided.
There are many different uses of spot welding machines. Some of the areas where it finds application are:
- Automotive manufacturing
- Metal working |
Bound rate, simple mean, all products (%)
Definition: Simple mean bound rate is the unweighted average of all the lines in the tariff schedule in which bound rates have been set. Bound rates result from trade negotiations incorporated into a country's schedule of concessions and are thus enforceable.
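A minimal sketch of the computation, using hypothetical tariff lines rather than real WTO schedule data:

```python
# Simple mean bound rate: an unweighted average over all tariff lines
# for which a bound rate has been set. The lines below are hypothetical.
bound_rates = {
    "HS 0101 (live horses)": 25.0,   # percent, illustrative
    "HS 5208 (woven cotton)": 40.0,  # percent, illustrative
    "HS 8703 (motor cars)": 55.0,    # percent, illustrative
}

# Each line counts equally, regardless of trade volume; that is what
# "unweighted" means here.
simple_mean = sum(bound_rates.values()) / len(bound_rates)
print(f"Simple mean bound rate: {simple_mean:.2f}%")  # prints 40.00%
```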
Description: The map below shows how Bound rate, simple mean, all products (%) varies by country. The shade of the country corresponds to the magnitude of the indicator. The darker the shade, the higher the value. The country with the highest value in the world is Bangladesh, with a value of 155.53. The country with the lowest value in the world is Hong Kong SAR, China, with a value of 0.00.
Source: World Bank staff estimates using the World Integrated Trade Solution system, based on data from World Trade Organization. |
Jute is a long, soft, shiny vegetable fiber that can be spun into coarse, strong threads. It is produced primarily from plants in the genus Corchorus, which was once classified with the family Tiliaceae, and more recently with Malvaceae. "Jute" (Corchorus capsularis) is the name of the plant or fiber used to make yarn, burlap, hessian or gunny cloth, bags, etc.
Jute is one of the most affordable 100% natural fibers and it is second only to cotton in amount produced. It falls into the bast fiber category (fiber collected from bast, the phloem of the plant, sometimes called the "skin") along with kenaf, industrial hemp, flax (linen), ramie, etc. The industrial term for jute fiber is raw jute. The fibers are off-white to brown, and 1–4 metres (3–13 feet) long. Jute is also called the golden fiber for its color and high cash value.
Jute is produced most abundantly in Bangladesh, and because its products are eco-friendly, worldwide demand is growing day by day out of care for the earth's environment; we take the privilege of serving and sharing in the world's environmental care. |
Tokyo, March 7: In an important step towards the development of green energy, Japanese researchers have developed a new prototype of a turbine to efficiently harness energy from ocean currents. In various experiments to test its design and configuration, the turbine was found to have robust construction, and it achieved efficiency comparable to that of commercial wind turbines, a study said. The turbine is especially suitable for regions regularly devastated by storms and typhoons, such as Japan, Taiwan, and the Philippines, the researchers noted. "Our design is simple, reliable, and power-efficient," said one of the researchers, Katsutoshi Shirasawa, staff scientist at the Okinawa Institute of Science and Technology Graduate University (OIST) in Okinawa, Japan.

In the journal Renewable Energy, the OIST researchers proposed a design for a submerged marine turbine to harness the energy of the Kuroshio Current, which flows along the Japanese coast. The turbine operates in the middle layer of the current, 100 metres below the surface, where the waters flow calmly and steadily, even during strong storms. The turbine comprises a float, a counterweight, a nacelle to house the electricity-generating components, and three blades. Minimising the number of components is essential for easy maintenance, low cost and a low failure rate, the researchers explained.

Water is over 800 times as dense as air, and even a slow current contains energy comparable to a strong wind, making ocean currents a viable clean and renewable alternative to fossil fuels. The new turbine design is a hybrid of a kite and a wind turbine: the ocean current turbine is anchored to the seabed with a line and floats in the current while the water rotates its three blades, the researchers said.
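To see why a slow current can rival a strong wind, compare the kinetic power density P/A = 0.5 * rho * v^3 of seawater and air. The speeds below are illustrative choices for a back-of-the-envelope check, not figures from the OIST study:

```python
# Back-of-the-envelope comparison of kinetic power density, P/A = 0.5 * rho * v^3.
# Speeds are illustrative; densities are standard reference values.
def power_density(rho_kg_m3, speed_m_s):
    """Kinetic power per unit swept area, in W/m^2."""
    return 0.5 * rho_kg_m3 * speed_m_s ** 3

water = power_density(1025.0, 1.5)  # seawater in a modest 1.5 m/s current
air = power_density(1.225, 15.0)    # air in a strong 15 m/s wind

print(f"1.5 m/s current: {water:.0f} W/m^2")    # ~1730 W/m^2
print(f"15 m/s wind:     {air:.0f} W/m^2")      # ~2070 W/m^2
print(f"density ratio:   {1025.0/1.225:.0f}x")  # ~837x, i.e. 'over 800 times'
```
|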
Copper: The Essential Metal (Part 2)
Copper is one of the most widely used metals on the planet, and has been for more than 10,000 years. Its history is as rich and distinctive as its unique colour, and it is now indispensable in modern society.
In this infographic we explore why copper prices have increased by 4x over the course of 10 years. Major factors include lower ore grades, exploration pushed to higher risk areas of the globe, and growing Chinese demand. |
Chapter 4 - Gallipolis City Schools
Business Ethics and Social Responsibility
4.1 Business Ethics
A neighbor offers you $15 for picking up her mail. Afterward, she gives you $20 and refuses to take change. She has actually given you two twenties that were stuck together. What would you do?
What would you do if you found a copy of
a midterm exam or a diamond ring in the
restroom at a restaurant?
The Nature of Ethics
ETHICS- moral principles by which people conduct themselves personally, socially, or professionally.
BUSINESS ETHICS- rules based on moral principles about how businesses and employees ought to conduct themselves.
Unethical behavior by customers results in businesses having to raise prices for everyone.
CODE OF ETHICS- set of guidelines for
maintaining ethics in the workplace.
Laws and Ethics
OSHA (Occupational Safety and Health Administration) is part of the US Department of Labor. It sets and enforces rules for work-related health and safety.
Ethics and Good Business
A code of ethics can cover issues such as employee behavior and environmental protection.
Unethical business practices include lying, offering merchandise known to be substandard, or treating customers or employees unfairly.
An owner could be fined or spend time in jail.
Employees might be fired or lose their licenses.
Ex.: Insurance companies
Suppose you own an auto-body paint shop. To increase your profits, you charge top price and use the cheapest paint. One of your customers complains about the quality of the paint, but you do not care because she has already paid. What is one customer, right? The truth is that most businesses (especially small businesses) rely on repeat customers. The amount you make in profit from one unhappy customer may not be worth the lost business. Or would it?
Suppose you manage a small film distribution company. You hire Jaime fresh out of business school to run the office. You teach him how to use the computer system, how to deal with customers, and how the business works. You also pay him very little, make him do all your work, and treat him poorly. The first chance Jaime gets, he quits and ends up being hired by one of your competitors. You now have to retrain a new employee to take his place. Meanwhile, your competition now has a well-trained employee, who is much more efficient.
Conflicts of Interest
Conflict between self-interest and professional obligation.
Ex: A manager of a small business hires his sister to do some work in the firm, but she is clearly unqualified to do the work. Giving the position to the sister will help out the family but will create morale problems with the other employees. It may also damage the business if her work does not get done. When making business decisions, employees have an ethical obligation to act in the best interest of the business.
Is it against the law? Does it violate company or professional policies?
Even if everyone is doing it, how would I feel if someone did this to me?
Am I sacrificing long-term benefits for short-term gains?
ETHICAL Decision Making
1. Identify the ethical dilemma.
2. Discover alternative actions.
3. Decide who might be affected.
4. List the probable effects of the alternatives.
5. Select the best alternative.
4.2 Social Responsibility
Social Responsibility is the duty to do what
is best for the good of society.
Businesses that follow ethical standards value integrity and honesty in employees. Ethics are an integral part of their operations.
Some people believe that if a company
produces goods that benefit society, it is
fulfilling its social responsibility.
Responsibility to Customers
Customers are a business’s first responsibility. A business should offer a good, safe product or service at a reasonable price.
The FDA (Food and Drug Administration) protects consumers from dangerous or falsely advertised products.
Fair competition between businesses is necessary
for the marketplace to operate effectively.
When companies restrict competition, consumers
are affected. They have fewer choices.
When a company does not have to compete, it can charge higher prices.
Responsibility to Employees
Some businesses provide work experience
for people with limited job skills.
Volunteerism is another way businesses
tackle societal problems.
Workers used to have few rights.
– Equal Pay Act (1963): men and women must be paid the same for doing the same job.
– Americans with Disabilities Act (1990)
Responsibility to Society
One of the biggest social issues facing businesses today is protecting the environment.
Created in 1970, the Environmental Protection Agency (EPA) enforces rules that protect the environment and control pollution.
Responsibility to Creditors and Investors
Coming into the 21st century, a number of corporations kept inaccurate accounting records. Because of this, the federal government passed additional legislation. The Sarbanes-Oxley Act mandates truthful reporting and makes the CEO more accountable for the actions of the financial managers of the firm. |
7 QC Tools
For Systematic Process Problem Solving & Quick Troubleshooting
(Suitable for Technicians, Line Leaders, Supervisors and Technical Operators)
The 7QC Tools are so called because they employ a systematic, graphical approach that makes data presentation easy and communication meaningful. They were first promoted by Kaoru Ishikawa, a renowned quality expert in Japan, for use throughout the country. The success of the 7QC Tools lies in the systematic application of the tools themselves, which helps the workforce visualize key issues graphically and thereby increases their understanding of the problem at hand.
The manufacturing workforce is the most important front-line control of productivity and quality. Total Quality Control must therefore be based on educating the whole workforce to deal with day-to-day basic quality issues, so that problems are nipped in the bud and prevented from arising. Total Quality Control can only be implemented successfully with the collaboration of the manufacturing workforce. The 7QC Tools are the most important practical, day-to-day techniques for:
a) Tool 1 – Selecting The Right Problem To Solve
b) Tool 2 – Know How To Collect The Factual Data
c) Tool 3 – Stratify The Data Into Meaningful Grouping
d) Tool 4 – Apply The Cause & Effect Analysis To Find The Most Likely Cause
e) Tool 5 – Know How To Verify Whether The Recommended Solution Is The Right One
f) Tool 6 – Continue To Monitor The Process To Check For Shift In Operational Process Stability
g) Tool 7 – Know How To Troubleshoot The Quality & Productivity Problem For Fast Containment Action
This training is designed with the main objective of achieving the following results:
1. Develop the operators' ability to think systematically by adopting the 7QC Tools approach in problem solving.
2. Train and educate the workforce to master each of the 7QC Tools so that they understand their purpose and know how to apply them in problem solving.
3. Equip the workforce with the technical know-how of problem solving and show how each step of problem solving is related to the 7QC Tools.
4. Develop the workforce's understanding of the difference between a problem's symptoms and its root cause, and the different approaches to addressing each.
5. Provide a quick troubleshooting skill to help the workforce identify the location of the problem.
a) The master trainer will provide full step-by-step instruction on the 7QC Methodology, identify useful charting methods, and give full instruction on how to make each type of chart.
b) Illustration of chart construction method and chart use will be provided.
c) The workforce is given the opportunity to construct the charts themselves, hands-on, for each step of the 7QC Tools process.
d) The workforce will be equipped with the 7QC Approach as well as the Quick Troubleshooting Approach to prevent a problem from causing further damage.
Why APRC 7QC Tools Stand Out
The master trainer, C.H. Wong, was involved in conducting Root Cause Analysis for the Rolls Royce Group in Asia Pacific for many years. He also thoroughly understands the application of many of the 7QC tools, having used them while working in Industrial Engineering and Manufacturing Engineering at General Electric Company. Participants will therefore benefit from his wealth of experience in using these tools as well as his involvement in conducting Root Cause Analysis for Rolls Royce. That is why Tool 7 includes a very important component for quick troubleshooting, the Decision Tree, which is widely used in Rolls Royce's approach.
Fundamental – Quality Problem Solving Approach Using 7QC Tools
Tool 1 – Pareto Analysis For Selecting The Right Problem To Solve (see the sketch after this list)
Tool 2 – Collection Of Factual Data
Tool 3 – Stratify The Data Into Meaningful Grouping & Interpret Their Distribution
Tool 4 – Apply The Cause & Effect Analysis To Search & Locate The Likely Root Cause
Tool 5 – Construct The Scatter Diagram To Verify That The Recommended Solution Is The Right One
Tool 6 – Monitor The Process By Setting Up Control Charts With Upper Control Limits & Lower Control Limits
Tool 7 – Decision Tree Diagramming Method For Effective Troubleshooting & Locating The Main Problem Point
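As a flavour of Tool 1, the sketch below ranks defect categories by frequency and flags the "vital few" that account for roughly 80% of occurrences; the categories and counts are invented for illustration only:

```python
# Sketch of Tool 1 (Pareto analysis): rank defect categories and stop at the
# "vital few" that account for ~80% of occurrences. Counts are hypothetical.
defects = {"solder bridge": 120, "missing part": 45, "scratch": 20,
           "misalignment": 10, "label error": 5}

total = sum(defects.values())
cumulative = 0
print(f"{'category':<15}{'count':>6}{'cum %':>8}")
for category, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:<15}{count:>6}{100 * cumulative / total:>7.1f}%")
    if cumulative / total >= 0.8:
        print("-> tackle the categories listed so far first (the vital few)")
        break
```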
For more information on in-house training provided by APRC, please contact the Administrator at: [email protected] |
Here is an essay (mid-term paper) on organizational communication.
An Essay on Organizational Communication. By Ernie Sanchez.
Organizational Communication Leon Estep.
In this essay I would first like to address how communication deals with the changing world of work. Communication is the transfer and understanding of meaning. It involves the process of gathering, processing and distributing information, which not only touches but is also a vital activity in any place of work and all of an organization's functions. Communication is a social process in the functioning of any group, organization or community. It influences the decisions of the individual and later the decisions of said organization. An organization is defined as a stable system or structure of individuals who work together to achieve, through a hierarchy of ranks, common goals. This structure influences the way we communicate in terms of the method and amount of information it channels. The reason for studying and understanding organizational communication is that it is highly structured. Through these means of communication, individuals understand their roles and functions in said organization. This behavior within the organization also affects how the organization reacts to and with the outside world.

Let us say you were asleep and woke up to a work day in, say, 1960. How different is your work life today, compared to what it was forty years ago? There would clearly be no Starbucks on every corner or cell phones in every pocket. In today's world, the structure, content, and process of work have changed. Work today is more cognitively complex, more team-based and collaborative, and more dependent on social skills. It involves more dependence on technological competence and more time pressure, along with more mobility and less dependence on geography. In the world today you will also be working for an organization that is likely to be very different because of competitive pressures and technological breakthroughs. Today's organizations are far leaner and more agile, more focused on identifying value from the customer's perspective, and more attuned to dynamic competitive requirements and strategies. They have less hierarchical structure and decision authority, offer less job security, are less likely to provide lifelong careers, and continually reorganize to maintain or gain competitive advantage.

The changing workplace is driven by organizational issues such as cognitive competence, social and interactive competence, the new "psychological contract" between employees and employers, and changes in process and place. Regarding cognitive competence: cognitive workers are expected to be more functionally and cognitively fluid and able to work across many kinds of tasks and situations. The broader span of work brought by changes in organizational structure also creates new demands. A 2001 report from the National Research Council called attention to the importance of the relational and interactive aspects of work. As collaboration and collective activity become more prevalent, workers need well-developed social skills; what the report calls "emotional labor". The psychological contract: as work changes, so does the nature of the relationship between employees and employers. In the new work context, the informal so-called "psychological contract" between workers and employers (what each one expects of the other) focuses on competence development, continuous training and work/life balance. In...
|
Wind power generates record amount of electricity in the UK
Windfarms generated a record amount of electricity on Wednesday (28 November), according to National Grid. Wind power generated 32.2% of Britain’s electricity, ahead of gas, which provided 23.5%.
The official figures from the grid show that Britain’s onshore and offshore windfarms hit a new high of 14.9 gigawatts (GW) between 6.00pm and 6.30pm on Wednesday evening.
Analysis conducted by Drax Electric Insights shows that this equated to 33 per cent of Britain’s electricity needs at a time of high demand. This beat the previous record of 14.5GW set on 9 November.
Nuclear supplied 17.9%, coal 8.7%, biomass 8%, imports 7.8% and hydro 1.7%.
On Thursday National Grid said wind generated 32% of Britain’s electricity followed by gas at 25%, while nuclear supplied 18.1%, coal 9.1%, biomass 7.1%, imports 5.9%, and hydro 2.0%.
Emma Pinchbeck, executive director of Renewable UK, said: “It’s great to see British wind power setting new records at one of the coldest, darkest, wettest times of the year, providing clean energy for people as they came home, switched everything on, turned up the power and cooked dinner. As well as tackling climate change, wind is good for everyone who has to pay an electricity bill, as the cost of new offshore wind has fallen spectacularly so it’s now cheaper than new gas and nuclear projects, and onshore wind is the cheapest power source of all.”
Chemical Corporation (UK) Ltd offers Mobil SHC Gear 320 WT - Wind Turbine Gearbox Oil, a product with second-to-none performance in rugged and extreme conditions, optimally combined with Mobil SHC Grease 102 WT.
Contact our Operations Director Steve Stewart by email at [email protected] or call 02920 880222 now to find out how our mobile gearbox oil exchange systems can provide you with huge cost savings for wind turbine gearbox oil changes or for any queries relating to Mobil Wind Turbine lubricant solutions. |
27 December 2018
Although the terms 'sponge rubber' and 'foam rubber' are often used interchangeably, there are actual physical differences between both materials that not everyone knows about. Think of it like this - although your foam mattress and the sponge you do the dishes with are made of similar materials, it's quite obvious that they are designed to serve different purposes.
This differentiation between the two materials is very important to technical buyers – after all, materials are chosen based not only on their functionality but also on their safety. For example, in the mass transit industry, some silicone foams meet FST standards, while the majority of carbon black foams fall short.
How is foam rubber made?
Foam rubber uses a gas or chemical known as a ‘blowing agent’ to produce gas, which creates multiple small bubbles inside a liquid mixture. This mixture usually consists of:
- Polyols
- Polyisocyanates
- Water
- Surfactants
- Flame retardants
Polyols and polyisocyanates are both types of liquid polymer – when combined with water, they produce an exothermic (heat-generating) reaction. By combining different types of liquid polymer, a foam rubber manufacturer can create either flexible or rigid foam rubbers.
This process is known as polymerisation and is the point at which molecules from the polyols and polyisocyanates crosslink, forming 3D structures. This takes place in a machine called a compounder, which can control the level of foaming by adjusting the volume of surfactants and water in the mixture.
How is sponge rubber made?
Sponge rubber is manufactured in a similar way to foam rubber, except it is designed to come in two distinct cell structures: open cell and closed cell.
In closed cell sponge rubber, the holes within the material are – as the name suggests – closed off from each other, which creates a dense material full of tiny vacuums. Open cell sponge rubber contains many open holes that allow air to fill the material. This makes it a far less dense material that is cheaper to manufacture, as it comprises less material and more air per cubic metre.
To produce open cell sponge rubber, ingredients are mixed in a heated mould, before sodium bicarbonate is added. The uncured sponge then rises like a cake, with the baking soda creating a network of tiny, interconnected bubbles.
Conversely, manufacturing closed cell rubber involves using a chemical powder that decomposes under heat. Pressure is added to the mixture, and nitrogen gas is produced, which helps give closed cell sponge rubber its density and strength.
Foam rubber and sponge rubber from Aquaseal
Aquaseal manufactures our sponge rubber (both open and closed cell) and foam rubber from synthetic and natural rubbers. Our products are used for applications requiring pads, shapes, coils, and gaskets, as well as mouse mat bases and simple strips of material.
Do you need a bespoke foam rubber or sponge rubber solution? Get in touch with the Aquaseal team today on 0191 266 0934. |
CNC machining covers several processing methods in industrial production in which CNC machines turn the workpiece into the desired shape by mechanically removing excess material (chips). Typical machining methods that fall under the category of CNC machining are CNC turning, CNC milling, CNC grinding and CNC drilling. Generally, the materials to be processed are metals, but other materials such as wood and plastics can also be machined. Companies operating in this area typically produce turned and milled parts in small, medium and large series (contract machining). Sample parts (individual pieces) can also be manufactured. |
Automatic assembly machine for car radiators
This stacking system for heat exchangers uses robots to bring two different sheets together.
- Task: Stacking two different sheets on top of each other.
- Solution: The sheets are first separated, then fed into the stacking system, where five robots stack them alternately on top of each other. When a stack is full, it is finished with a lid and removed from the machine, to be inserted into the press by an employee.
- Result: This enables the production of finned heat exchangers to be automated with an expedited cycle time of just 0.25 seconds per part. It also paves the way for automatic detection of defects during the manufacturing process thanks to a powerful camera system. |
Advertising (or advertising in business) is a form of marketing communication used to encourage, persuade, or manipulate an audience (viewers, readers or listeners; sometimes a specific group) to take or continue to take some action. Most commonly, the desired result is to drive consumer behavior with respect to a commercial offering, although political and ideological advertising is also common. This type of work belongs to a category called affective labor.
In Latin, ad vertere means “to turn toward”. The purpose of advertising may also be to reassure employees or shareholders that a company is viable or successful. Advertising messages are usually paid for by sponsors and viewed via various old media, including mass media such as newspapers, magazines, television advertisements, radio advertisements, outdoor advertising or direct mail; or new media such as blogs, websites or text messages. |
Trucks hauling mounds of sand into the southern Minnesota town of Winona for delivery to drilling sites across the nation's shale regions are not spewing dangerous dust emissions into the air, preliminary data shows.
The data, released early this month, comes from a monitor for crystalline silica dust from frac sand, a known trigger of lung disease. The instrument was placed along Winona's busy truck route at the start of the year in response to local concern.
Dust reached detectable levels only two out of the 38 days measured during the last seven months, according to air regulators at the Minnesota Pollution Control Agency (MPCA).
Even when the dust was detected, once in June and another time in August, the levels were very low, according to Jeff Hedman, an MPCA engineer involved in the study. "We're happy to see it," he said.
Many Winonans, including planning commissioner Ken Fritz, are relieved by the data. "I certainly think there is enough evidence out there that [shows] silica sand can create health problems in certain environments...in this case, basic information doesn't indicate any problems," he said.
The silica monitor sits atop a two-story YMCA building downtown and was set up to capture emissions from frac trucks. But Winona's potential dust emissions can come from more than just trucks hauling sand, there are also mining and processing facilities in town. Some citizens who live near those facilities have asked, "How do you know that you are meeting the [air] standard in my backyard?" said MPCA air monitoring unit supervisor Rick Strassman.
According to the MPCA, this data is just the starting point for understanding the risks posed to the region's air by the growing silica sand industry.
Minnesota is the nation's fourth leading producer of pure silica sand, according to the U.S. Geological Survey. The nation's top producer, Wisconsin, is just next door. Both states host vast silica reserves and have expanded their development of the sand in recent years to keep up with growing demand from energy companies.
Active silica sand facilities between the two states have ballooned from less than 20 in 2010 to over 100 today.
Winona is ground zero for Minnesota's frac sand boom. It has at least six active sand mining, processing and transport facilities, the highest density in the state. About 100 trucks arrive daily from in and out of state. All that sand is then shipped by train or barge to frack sites across the country. Operators blast the hard, round sand down wells to break and hold open cracks in the bedrock to extract oil and gas reserves. It can take up to 10,000 tons of sand to frack a single well during its lifetime.
Scientific studies have detailed the effects of silica dust—particles that are small enough to enter lung tissue and the blood stream and trigger the lung disease silicosis—on workers handling the material, but not on neighboring communities. Winona's government-funded monitor, the first in the state not paid for by industry, was added to chip away at that scientific gap.
The data recently released comes from a monitor targeting concentrations of particulate matter up to 4 micrometers in diameter, called PM 4. These specks of dust are 20 times smaller than beach sand. The dust accumulates in the monitor's filter. Samples are collected over a 24-hour period every six days and then sent to a New York lab for what's called "speciation" analysis to calculate how much of the collected material is frac sand. Afterward, the processed data is sent back to Minnesota, where it is checked by state regulators and published online.
So far, over seven months of data have been processed. Most observed days, levels were below the detectable amount. When dust was detected, it was less than 0.5 micrograms per cubic meter. The chronic health benchmark used by the MPCA is six times higher—3 micrograms per cubic meter.
The monitor will continue collecting data through the year's end.
A second Winona monitor atop the YMCA measures even smaller particles in the air from all sources in town, not just frac sand. The data it collects requires minimal processing and has been available for months.
"The silica data is really good news," said Crispin Pierce, a University of Wisconsin-Eau Claire professor of environmental public health. He has studied silica emissions across Wisconsin and Minnesota.
Still, Pierce said, it's important not to jump to conclusions and "to measure the whole year through to see any trends."
For some residents, including Jane Cowgill, the monitor's results are promising but don't provide the full picture for Winona. Cowgill, co-founder of the local grassroots activist group Citizens Against Silica Mining, points out that there are a handful of frac sand mine and processing sites on the outskirts of town.
"If you are living at the [air monitor] building, you are probably OK," she said. But if you are near these other sites, she noted, there is anecdotal evidence of a problem—"cars covered with dust, furnaces all clogged up, and reports of respiratory problems."
The only way to fully understand Winona's frac sand air risk is to set up monitors at the edges, or fences, of these other facilities, she said. She calls the strategy "fenceline monitoring" and is part of a group lobbying the City Council to approve more monitoring.
Ken Fritz is one of the city officials who voted down a proposal this summer to approve additional monitors. He wanted to see the current monitors' data before making the decision. Now that the data is available, he's still convinced no further monitors are needed.
Minnesota regulators said the active monitors aren't sufficient to capture the entire town's silica sand air risk. Additional "monitoring is still warranted, but that's really the city's call," said MPCA's Strassman. |
1301.0 - Year Book Australia, 2005
The construction industry plays an important role in the Australian economy. Construction provides homes, places for people to work, and recreation facilities. It provides essential facilities and infrastructure such as schools, hospitals, roads, water and electricity supply and telecommunications.
Both the private and public sectors undertake construction activity within Australia. The private sector operates in all three areas of activity, with a major role in residential and non-residential building activity. The public sector has a major role in initiating and undertaking engineering construction. In addition it has a role in non-residential building activity, in particular for the health and education industries, building hospitals and schools.
The chapter includes an article, 'Australian home size is growing'. |
Fraught with uncertainty
Calculating the costs of generating electricity has never been as uncertain as today. This is the result of a number of factors, including the liberalisation of energy markets, the fast pace of technological development, the volatility of fuel prices and questions about climate policy.
Nuclear energy is the most competitive option when financing costs are low
The IEA-NEA study notes that ‘no technology holds a consistent economic advantage at a global level under all circumstances’. Costs depend strongly on local conditions and on variable factors such as the cost of capital and the price of carbon. The researchers have nevertheless drawn some general conclusions. When financing costs are low (assuming an interest rate of 5%), nuclear energy, which requires high capital investment, is the most competitive option, followed by coal-fired power plants. With higher financing costs (10% interest rate), coal-fired electricity is the cheapest option. The costs of gas-fired power are highly dependent on gas prices and much less on financing costs. The attractiveness of onshore wind power depends very much on local circumstances. It is significantly cheaper in North America than in Europe. Offshore wind power is ‘currently not competitive’, the report notes. Neither is solar power.
LCOE = levelised cost of electricity generation
The accompanying graphs show the generating costs of nuclear, coal, gas-fired, and onshore wind power at a 5% and 10% discount rate. It should be noted that these results incorporate a carbon price of $30 per ton. They do not include transmission and distribution costs nor the additional costs of balancing and backup power that wind power requires. The study assumes a gas price of $10.30/MMBtu in Europe, which is roughly twice as high as current spot prices. The costs of nuclear power do include the costs for refurbishment, waste treatment and decommissioning after a 60-year lifetime.
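To make the sensitivity to financing costs concrete, here is a minimal LCOE sketch: discounted lifetime costs divided by discounted lifetime generation. The plant numbers are invented for illustration and are not inputs from the IEA-NEA study:

```python
# Minimal LCOE sketch: discounted lifetime costs / discounted lifetime output.
# Plant parameters are invented, capital-heavy (nuclear-like) illustrations.
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, rate):
    """Levelised cost of electricity in $/MWh."""
    discount = [(1 + rate) ** -t for t in range(1, lifetime_years + 1)]
    costs = capex + annual_opex * sum(discount)
    output_mwh = annual_mwh * sum(discount)
    return costs / output_mwh

for rate in (0.05, 0.10):
    cost = lcoe(capex=5e9, annual_opex=1.2e8, annual_mwh=8.8e6,
                lifetime_years=60, rate=rate)
    print(f"discount rate {rate:.0%}: {cost:.0f} $/MWh")
# Roughly $44/MWh at 5% vs $71/MWh at 10% with these inputs: the same plant
# looks far cheaper when financing is cheap, which is why the ranking flips.
```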
Lack of experience
The report notes that the ‘precise cost competitiveness’ of the four technologies ‘depends more than anything on the local characteristics of each particular market and their associated cost of financing, as well as on CO2 and fossil fuel prices’. Each of the technologies has ‘potentially decisive strengths and weaknesses’.
About onshore wind the report says that ‘it is currently closing its still existing but diminishing competitiveness gap’. However, many European countries have ambitious plans to develop offshore wind. About this, the IEA says that ‘offshore wind is currently not competitive with conventional thermal or nuclear baseload generation’.
When we dig further into the data underlying the conclusions, some other points seem worthy of notice. All cost estimates are based on existing or commissioned power plants; they are not estimates of future costs. In this context, the researchers note that when it comes to nuclear power, there is a ‘lack of recent construction experience in many OECD countries’. This implies that the costs of newly to be built nuclear power stations could come out much higher – or much lower. Most recent experience comes from South Korea, where costs for nuclear power plants turn out to be relatively low.
For offshore wind, costs of actual or commissioned parks range from $101 per MWh (for a project in the US) to $260 per MWh (for a project in Belgium). The most valuable data in this sector could no doubt come from Denmark, which has the most experience building offshore wind parks. Unfortunately, as the report notes, Denmark did not supply any data.
Costs of solar PV vary from $225 to $600 per MWh. The two solar thermal plants included in the report have costs between $136 and $243 per MWh.
Although the IEA-NEA report is indispensable as a benchmark study for the electricity sector, the authors note that producers and investors will always face large uncertainties when making specific investment decisions. In fact, as the authors note, since the publication of the report series started, in 1983, the market has never faced the ‘degree of uncertainty’ that it faces now. ‘In the medium term, investing in power markets will be fraught with uncertainty’, they write.
They give five reasons for this enormous uncertainty in the market. First, ‘the widespread privatisation of utilities and the liberalisation of power markets’ has reduced ‘access to data’.
Second, ‘policy factors’ have ‘rarely created more uncertainty for the cost of different power generation technologies than today’. Those ‘policy factors’ include carbon pricing, liberalisation and ‘re-regulation’ of markets, security of supply concerns for gas, the technical and regulatory uncertainties surrounding carbon capture and storage (CCS), feed-in tariffs ‘of limited duration’ for renewables, and regulatory uncertainty with regard to nuclear power.
Third, ‘after two decades of relative stability, the power sector abounds … with new technological developments’.
Fourth, relatively little new construction of power plants has taken place in OECD countries in recent years, making recent data harder to come by. Most newly built power plants have been combined-cycle gas turbines and onshore wind parks.
And the fifth source of uncertainty is ‘the rapid changes in all power plant costs’ in the last five years, e.g. as a result of ‘unprecedented inflation’ of construction costs and extreme volatility of fuel prices.
For governmental policymakers, the most important lesson of the study is that ‘choices exist’. The authors note that ‘governments play a key role when it comes to the costs of raising financial capital and the price of carbon’. Capital cost is a function of perceived risk – and ‘smart government action can do much to reduce risks’. Raising (or lowering) the carbon price would also make a huge difference – it would ‘decisively tilt the current competitive balance in one direction or another’. |
The New Energy and Industrial Technology Development Organization (NEDO) and its partners announced that construction has started on facilities for testing a large-scale hydrogen system that uses electricity from a 10MW solar power plant to produce hydrogen in Namie-machi, Fukushima Prefecture, Japan.
The Fukushima Hydrogen Energy Research Field (FH2R), a joint project of NEDO, Toshiba Energy Systems & Solutions Corp, Tohoku-Electric Power Co Inc and Iwatani Corp, is aimed at establishing a hydrogen energy system equipped with 10MW-class hydrogen production facilities.
The facilities will produce, store and supply up to about 900t of hydrogen per year by using the 10MW-class hydrogen production facilities. Power will be supplied from an adjacent 10MW-class solar power plant in addition to a power grid when there is a shortage of power due to bad weather, etc.
The water electrolysis system employed for the plant is a product of Asahi Kasei Corp (maximum water electrolysis power: 10MW, maximum amount of hydrogen produced: 2,000Nm3/h). The supplier of the solar power generation system has not been decided yet. The scale of the solar power generation facilities is scheduled to be expanded to about 20MW in the future.
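As a back-of-the-envelope consistency check (the arithmetic below is illustrative and not part of the announcement), the electrolyser's maximum output can be converted into an annual tonnage and compared with the quoted figure of about 900t per year:

```python
# Convert the electrolyser's peak output (2,000 Nm^3/h) to tonnes per year.
# The density of hydrogen at normal conditions is ~0.0899 kg/Nm^3.
H2_KG_PER_NM3 = 0.0899
peak_nm3_per_h = 2000.0
hours_per_year = 8760.0

max_tonnes = peak_nm3_per_h * H2_KG_PER_NM3 * hours_per_year / 1000.0
print(f"Maximum at full utilisation: {max_tonnes:.0f} t/year")  # ~1575 t/year

# The quoted ~900 t/year implies roughly 57% utilisation, consistent with a
# plant that follows solar output and grid conditions rather than running flat out.
print(f"Implied utilisation: {900.0 / max_tonnes:.0%}")
```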
The production and storage of hydrogen will be conducted based on the market demand for hydrogen estimated by a hydrogen demand prediction system. By adjusting the amount of hydrogen produced, the supply-demand balance of the power grid will be controlled.
The four organizations will test optimal operation control technologies combining the demand response of the power grid and the adjustment of hydrogen supply/demand to deal with (1) devices with different operating cycles and (2) demands with different input timings, periods and amounts.
The project is entrusted by NEDO. Toshiba Energy Systems & Solutions coordinates the entire project and is responsible for the entire hydrogen energy system. Tohoku-Electric Power is responsible for the system to control the power grid and other technologies related to the grid. Iwatani is responsible for the system to predict the demand for hydrogen and hydrogen storage/supply.
Hydrogen produced is carried by compressed hydrogen trailer and is expected to be used for (1) power generation using fuel cells, (2) vehicles such as fuel-cell vehicles and fuel-cell buses and (3) manufacturing plants (as fuel). The facilities are scheduled to be completed and start trial operation by October 2019, and verification and transportation of hydrogen, to check technical issues, are scheduled to begin by July 2020. |
What is a logo?
A logo is often defined as a recognisable and distinctive graphic, emblem or unique symbol that identifies your company to the general public and your specific target market. It is included throughout all communications, paperwork, marketing materials, etc. A logo can include just a graphic, just a name, or both.
Logos have often been dictated by the technologies available to create them; for example, when the only method of creating a text-based logo was hot metal typesetting, the designs were very limited. This spurs on logo evolution: as these technologies develop, creating higher-quality designs that are easier to mass-produce, the logo often adapts to suit.
Here we have 19 of the world's most famous logos through the years, showing how they have changed and adapted to technological, design and social trends and changes:
Pepsi was introduced to the world in 1893 with a logo written in a red script font on a plain white background. This has changed over the years, first becoming more detailed and elaborate while sticking with the red and white, until blue was added to the logo in 1950. From the 1960s they continued to make the logo bolder with black text, using the shape of a bottle top. Their current logo incorporates a simpler version of the bottle top with simple curved white text on a blue background.
Starting out around 1900, Shell has grown into the world's largest corporation as of 2016. Their first logo was a literal inked clamshell in black and white (which was extremely easy to reproduce). From then, the logo stayed black and white, although the design changed slightly, up until 1948. In 1948 they introduced their first colour logo of red and yellow, keeping the same graphic and adding their name to the logo. The design gradually became more simplistic from 1971 to today, creating the flat 2D logo we know now.
IBM is the result of two companies merging (the International Time Recording Company and the Computing Scale Company); this created their first logo in 1888, which combined the letters I, T, R and C. The font became more elaborate up until 1924, at which point a name change meant the logo spelled out the new name, 'International Business Machines'. In 1947 this changed again to become a plain outline of the IBM letters. This evolved in 1971 into the blue horizontally striped logo we know today.
Mercedes introduced their cars to the world in 1902 with a simple text logo in an oval shape. In 1909 this changed to become the familiar three-pointed star. Mercedes played around with the inclusion of the oval and circular shape around the star and their name from 1909 to today, making only slight changes to the colours used.
Volkswagen first introduced themselves in 1930 with a black and white logo that used the classic VW combination surrounded by a more detailed design. This was simplified in 1939, and simplified again in 1967 when they first introduced the blue to their logo. In 1978, they switched from blue on a white background to white on a blue background which has evolved into the logo we know today.
Renault's logo began in 1900 as an emblem featuring the founders' initials, with an elaborate design typical of the styles of the time. This became a more literal logo when they changed it completely to an encircled car in 1906. In 1919 this was changed again to an encircled tank. It wasn't until 1926 that the classic diamond shape was introduced, with the name of the company through the middle. In 1946 they introduced yellow to an otherwise black and white logo. This was changed a number of times, both including and then excluding the colour and text, until they arrived at the logo we know today, created in 2007.
Founded in 1899, Fiat have changed their logo dramatically, starting out with a brown parchment that included details of their company. In 1901, they introduced their first logo, which was the Fiat name on a blue background with an elaborate surrounding design. From then, the outer shape changed, and the blue was swapped for red in 1921. In 1931, the logo became square, until 1999 when the blue was reintroduced. In 2006 this was changed back to red and resembles the logo we know today. One element of their logo has been the same throughout: the unique shape of the letter 'A'.
Founded in 1976, the first Apple logo featured an illustrated black and white scene of Sir Isaac Newton discovering gravity under an apple tree. In 1976, this was changed to the recognised, simplistic apple shape with rainbow horizontal stripes. This was further simplified to a black 2D apple shape, becoming the iconic monochrome apple in 2000.
Founded in 1964, Nike didn't introduce a logo until 1971 which featured the infamous 'tick' shape through a lower-case, curvy font stating the name of the company. This was rearranged and changed to upper-case in 1978. In 1985, Nike changed their blue and white logo to red on white, finally changing this again in 1995 to the simple 'swoosh' using white on a black background.
Coca Cola is a recognised brand, famous for its logo design, however when the company first started in 1886, the logo was a simple, upper-case black and white font. From the early 1900s they introduced the now synonymous Script font, still in black and white. The red and white wasn't introduced until the 1950s. In the 1990s the logo saw its biggest change in years with the introduction of the bottle of coke in the logo, this was removed in the early 2000s to become the logo we now know.
Lego first became available to buy in 1949, with a simple black logo in a standardised font. In 1946, they first introduced red and yellow to their design. In 1955, the font was changed to a less formal, circular-based font, similar to the one used today. Between 1955 and now, the design has only seen a change in colours, from dull to bright red and yellow.
Founded in 1966, Mastercard first used a black and white logo under their name Interbank. In 1969, they introduced the overlapping red and yellow circles. Since then they have adapted their font from a lower-case, informal font, to the sentence-case, formal font we know today.
Founded in 1940, McDonald's first logo was a simple script font. It wasn't until 1946 that the logo became circular and introduced an illustration. In 1962, they introduced colour and simplified the logo to the classic 'M'. This has since been adapted to include their full name, slogan and various shades of the red and yellow, to arrive at the golden 'M' of today.
Founded in Japan in 1937, Canon's first logo was a scene of the Buddhist Goddess of Mercy on a lotus flower with the Japanese name. This was changed to the name only in 1934. In 1935, the logo became 'Canon' and has since only been adapted through the use of different fonts.
Founded in 1930, the logo has always included the infamous illustration of Colonel Sanders, who founded the company. The logo first started out as black and white text, which didn't introduce the famous red colour until 1991. The shape of the logo went from square to circle in the 90s to become the logo we know now.
Founded in 1971, the Starbucks logo was black and white with the classic mermaid in the centre. They first introduced colour (blue) to their logo in 1987, changing this to green in 1992. The design has since had the text removed to become the mermaid symbol we recognise today.
Walmart has always included the word 'Walmart' in a simplistic font. The major change in the logo's evolution has been the use of colour, which has been blue, black and white, brown, and blue again.
Founded in 1998, Google have always used the multiple colours that they use today; however, their font has changed from a bulky serif font with a drop shadow to the thin, stylised, flat version we now see today.
Founded in 1991, Vodafone's first logo was a simplistic black and white, upper-case font stating their name. In 1997, they first introduced colour (red), keeping the font the same, and added the classic Vodafone 'O' shape on a red background. The red background was removed in 2006, leaving the 'O' and the name underneath. |
I believe that I’m being discriminated against at my workplace. What are my rights?
Your Rights Under The Fair Work Act
The Federal Government’s Fair Work Act protects against discrimination for employees covered by the federal system. One way that the Fair Work Act does this is through its “general protections provisions”.
Section 351 of the Fair Work Act states that an employer must not take “adverse action” against an employee, or prospective employee, because of any of the following:
- race or colour;
- sex;
- sexual orientation;
- age;
- physical or mental disability;
- marital status;
- family or carer's responsibilities;
- pregnancy;
- religion;
- political opinion;
- national extraction; or
- social origin.
“Adverse action” by an employer can include any of the following:
- terminating the employee;
- withholding an employee’s legal entitlements (such as annual leave pay);
- injuring the employee;
- demoting an employee;
- refusing to hire a prospective employee; and
- treating an employee differently to other employees.
A Claim to The Australian Human Rights Commission
The Australian Human Rights Commission can take action against discrimination under the following laws:
- the Australian Human Rights Commission Act 1986 (Cth);
- the Racial Discrimination Act 1975 (Cth);
- the Sex Discrimination Act 1984 (Cth);
- the Disability Discrimination Act 1992 (Cth); and
- the Age Discrimination Act 2004 (Cth).
Under these laws, the Australian Human Rights Commission can investigate complaints of discrimination based on any of the following:
- disability (both physical and mental);
- criminal record (in employment context only);
- trade union activity (in employment context only);
- political opinion (in employment context only);
- religion (in employment context only); and
- social origin (in employment context only).
A complaint to the local state or territory anti-discrimination body
There are also State and Territory laws against discrimination. If you have been discriminated against and there was a breach of an applicable State or Territory law, then you may be able to make a complaint to your local State and Territory anti-discrimination body.
When can I complain about bullying and harassment?
Under the Fair Work Act, it is an employee’s “workplace right” to be able to make a complaint about their employment. Therefore, if you complain about bullying or harassment in the workplace, your employer cannot take “adverse action” against you as a result.
As mentioned earlier, “adverse action” includes terminating an employee, as well as many other actions. It doesn’t matter if bullying or harassment didn’t actually occur in the first place. It is your workplace right to make a complaint about your employment.
If you made a complaint about bullying or harassment in your workplace and you were terminated or disadvantaged in some way as a result, you may be entitled to lodge a general protections claim.
What is “bullying” under the law?
Under the Fair Work Act, a worker is “bullied at work” when:
- an individual, or a group of individuals, repeatedly behaves unreasonably towards the worker, or a group of workers of which the first said worker is a member; and
- the behaviour creates a risk to health and safety.
“Reasonable management action carried out in a reasonable manner”, however, is not bullying under the Fair Work Act.
Can I take legal proceedings against bullying?
If you are covered by the federal system, you can make a formal application to the Fair Work Commission for an “order to stop bullying”.
What Are My Rights If I’ve Been Sexually Harassed?
You may be able to lodge a complaint to the Human Rights Commission. Visit our sexual harassment page for more information.
Contact us for a free discussion if you believe you have been discriminated against, bullied or harassed. |
In the previous two posts I discussed why leaders feel helpless about improving the innovation culture. I also explained what culture is and how it can be shaped (cultured) by using processes over a very long time. Many readers asked for examples of the stumbling blocks in innovation culture and how processes could help in overcoming them. Many such examples are explained in the book that I wrote, but I am summarizing two examples in this post and another two in my next post to complete the discussion on innovation culture.
Think of the life-cycle of a seed that germinates and grows into a flowering tree or consider a new born baby that grows into an adult. Both, the seed and the baby, need different types of nourishment and environment in different stages of their growth. Similarly, cultural elements that nurture innovations vary from the time when an idea takes birth in the mind of an individual all the way to the stage when the organization gets the return from the innovation.
Let me take one of the early stages of the innovation life cycle (ideation) and explain the importance of a few cultural elements.
An idea takes birth in the mind of an individual. The organization thereafter needs to invest in the idea to turn it into an innovation. In one of my previous posts, I explained that innovations are always top down. Since every employee is at the top of his or her own team (the lowest person in the hierarchy is still in control of their own actions), everyone in the organization can invest in ideas. This investment could be in terms of human or financial resources. The originator can make this investment alone only if the idea falls clearly within the area under his/her control or influence. This is rarely true, as implementing an idea typically needs the involvement of several others beyond the ideator's control or influence. Therefore, the first step for the ideator is to share the idea with others. This first step of sharing the idea, which seems straightforward, needs the support of quite a few cultural elements.
There are several forces that stop or discourage a person from sharing his/her idea. I call these drags and leaders often refer to them as culture! Here are a few drags:
- Drag #1: Will I be called stupid?
- Drag #2: Do I want to work on my idea?
- Drag #3: What happened last time when I gave an idea?
- Drag #4: Is my company willing and capable of executing my idea?
Drag # 1: Will I be called stupid?
This is an almost omnipresent concern. Employees fear being mocked and tagged as a ‘disruptive idiot’ by peers and superiors. How do you think we can shape a culture in which employees have no hesitation in sharing their ideas? What processes could be installed? In today's digital world this is a much easier problem to address. Imagine an idea management system that allows employees to share their ideas while readers cannot see the name of the ideator! If anonymity takes away the excitement, the system could allow employees to create avatars of their choice!
While this process will assuage the fear of being mocked, employees will still not share all their ideas – we need to get rid of the second drag too!
Employees also fear someone else stealing their idea. This fear develops when there are instances (and stories) of people taking credit for someone else's idea. The above system would mitigate this fear too, encouraging employees to share and stake their claim to their ideas before someone else could steal them!
Drag # 2: Do I want to work on my idea?
Ideas need creativity, but implementation needs the passion and impatience of a maverick and the calmness, perseverance and resourcefulness of a Sherpa. These contrasting skills are difficult to find in one person, and rarely found among creative ideators. Therefore, if organizations have a policy of assigning responsibility for implementing an idea to its originator, it works as a drag: the employee weighs his/her interest and capability before sharing the idea.
A simple process change that does not require ideators to take on the responsibility of implementing their ideas will solve the problem. The moment an idea is selected for implementation, the ideator's name can be revealed and ownership of the idea should transfer to the person who has control or influence over the area in which the idea needs to be implemented. The organization should then assemble a team of Sherpas and mavericks to implement it. (Note: mavericks, e.g. Everest climbers, are seldom successful without Sherpas, and Sherpas have no role without mavericks.)
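To make these two process fixes concrete (anonymity at submission, credit and ownership transfer at selection), here is a minimal sketch in Python. The class and method names (IdeaVault, submit, select_for_implementation) are my own illustrative choices, not a reference to any real idea-management product:

```python
import uuid

class IdeaVault:
    """Toy idea-management store: ideators stay anonymous (avatar only)
    until an idea is selected for implementation."""

    def __init__(self):
        self._ideas = {}  # idea_id -> record

    def submit(self, ideator_name, avatar, text):
        idea_id = str(uuid.uuid4())[:8]
        self._ideas[idea_id] = {
            "ideator": ideator_name,   # recorded, but never shown (Drag #1)
            "avatar": avatar,          # public identity of the ideator
            "text": text,
            "selected": False,
            "owner": None,             # area owner, assigned on selection
        }
        return idea_id

    def view(self, idea_id):
        """What any reader sees: the idea and the avatar, never the name."""
        rec = self._ideas[idea_id]
        return {"avatar": rec["avatar"], "text": rec["text"]}

    def select_for_implementation(self, idea_id, area_owner):
        """On selection, reveal the ideator (credit) but transfer
        implementation ownership to the area owner (Drag #2)."""
        rec = self._ideas[idea_id]
        rec["selected"] = True
        rec["owner"] = area_owner
        return {"ideator": rec["ideator"], "owner": rec["owner"]}

vault = IdeaVault()
i = vault.submit("Asha", avatar="NightOwl", text="Self-serve analytics portal")
print(vault.view(i))                            # avatar and text only
print(vault.select_for_implementation(i, "Head of IT"))
```

The point of the design is simply that the ideator's name lives in the record from day one, so credit cannot be stolen, but it is only revealed once the idea is selected and ownership moves to the area owner.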
Will employees share all their ideas now? Yes, for a short period. Long-term sustenance will depend on addressing the remaining two drags (Drags #3 and #4).
I urge you to think of (and send your responses on) processes that would help reduce the effect of Drags #3 and #4. I will share my views on these in the next post. |
late 15c., "to use to one's profit, to increase (income)," from Anglo-French emprouwer "to turn to profit" (late 13c.), from Old French en-, a causative prefix or from em-, + prou "profit," from Latin prode "advantageous" (see proud (adj.)).
Spelling with -v- was rare before 17c.; it apparently arose from confusion of -v- and -u-. Spelling otherwise deformed by influence of words in -prove. Meaning "make better, raise to a better quality or condition" first recorded 1610s. Intransitive sense "get better" is from 1727. Phrase improve the occasion retains the etymological sense. Meaning "to turn land to profit" (by clearing it, erecting buildings, etc.) was in Anglo-French (13c.) and survived or was revived in the American colonies and Australia. Hence, "make good use of, occupy (a place) and convert to some purpose." |
New ways to split atoms
29 November 2018
In Canada, nuclear innovation has been happening for decades through research and development. Now, with further technology advances in traditional Candu plants and small modular reactors, the industry hopes to attract a new generation as it makes its pitch to help save the world from climate change.
THERE IS A NEW SENSE of urgency across the nuclear industry as policy makers, engineers and entrepreneurs work fervently to guide advanced and small modular reactors (SMRs) through regulatory and technical hurdles from drawing board to siting approval.
In Canada, the federal government has its own stake in SMRs. On 7 November, coinciding with a sold-out SMR nuclear conference in Ottawa, Amarjeet Sohi, Canada’s Minister of Natural Resources, welcomed the release of the Canadian Small Modular Reactor Roadmap as an “important technology opportunity for Canada, both at home and on the world stage.”
The SMR could help Canada meet its commitment to phase out coal-fired electricity by 2030, something Ontario did in 2014, largely with the help of its nuclear fleet.
The country’s major utilities, SMR technology vendors and Canada’s national lab, Canadian Nuclear Laboratories (CNL), are all pursuing SMR demonstration, commercialisation and manufacturing. New Brunswick announced C$10 million toward development of an Advanced Nuclear SMR Research Cluster, with $5 million each from two SMR vendors. The aim is to develop a path to a commercial demonstration plant with potential for a manufacturing cluster in the province that would dovetail into its economic growth plan.
The CANDU Owners Group (COG) has recognised the strong SMR interest of its Canadian members. Using the levers it already has for collaborative Candu technology development, it has created an SMR technology forum to collectively tackle technical and regulatory issues common across the technologies. It is also developing a vendor participant programme for SMRs similar to its well-established Candu supplier participant programme.
Meanwhile, Candu nuclear operators in Ontario and New Brunswick have also been developing innovative approaches to managing the country’s 19 existing reactors. These have completed, are undergoing, or are heading into mid-life refurbishment or major component replacement. Some will continue operation into the 2060s, after up to 80 years of operation, as a result.
In New Brunswick, the single unit, commissioned in 1983, returned to service after a mid-life refurbishment in 2012. This year, it hit some of its best operating performance targets in decades.
Also this year, one of Ontario’s Darlington units began refurbishment, which has been proceeding on time and on budget. Ontario’s 18 units provide about 60% of the province’s power.
In recent years, plants have benefitted from advanced technology breakthroughs like artificial intelligence (AI). Their use of AI includes embedded sensors to monitor plant and equipment condition, and automated machinery and robots that work in high radiation areas while workers oversee the work remotely. Virtual reality is used to train and qualify staff cheaply and with precision. Other digital technologies, including 3D printing of ‘hard to source’ parts, have been employed to improve plant condition, equipment, operations, maintenance and security.
New digital technologies are not the sole reason Candu performance is improving. Research under way for decades has brought plant, equipment and human performance improvements. It has also validated safety cases and helped engineers and nuclear scientists understand how to improve conditions for life-limiting components so they operate better and longer. In Ontario, units have won regulatory approval for several additional years of operation, representing billions of dollars of additional revenue. The research has also demonstrated, and improved, plant safety for workers, the public and the environment, thanks to years-long collaborative research projects between utilities and COG, sometimes with international Candu operating partners and with research labs such as CNL, Kinectrics and Stern Laboratories.
COG will celebrate its 35th anniversary in 2019. President and CEO Fred Dermarkar says members collectively achieve breakthroughs that individual operators may not have been able to achieve alone. COG invests more than C$60 million in R&D on behalf of its members each year.
“Often, when people see a nuclear plant from the outside, they see exactly what they have seen since the plant began operating decades before,” Dermarkar says. “What they don’t realise is year over year, inside the plant, we have been innovating our approaches.”
Plant knowledge and work processes have evolved, Dermarkar adds. For example, COG’s databank has more than 44,000 pieces of operating experience available to its members. Industry suppliers are also participating in knowledge-sharing programmes to ensure both contractors and parts come to the job ready, with proper qualifications. “We have strengthened [plant] resilience, improved efficiency and also our own techniques in operating and maintaining,” says Dermarkar. “These are 21st-century operations.”
The Ontario plants have also expanded their mandate beyond electricity generation with further development in areas such as nuclear medicine. Bruce Power has signed partnerships with Kinectrics and Isotopen Technologien München, and OPG with BWX Technologies, to help develop their reactor by-products, including Cobalt-60, Lutetium-177 and Molybdenum-99.
In addition to collaborative research through COG, the utilities and CNL have recently developed independent centres of research and development in areas including reactor sustainability (CNL), advanced SMRs (New Brunswick) and at innovation centres capturing initiatives across CANDU, SMR and medical technologies (OPG and Bruce Power).
Collaboratively, with the Nuclear Energy Agency (OECD-NEA), COG is working to bring the national labs and utilities from all COG-member countries together to strategically develop and share research to take the innovation agenda further, says Dermarkar. COG and the NEA co-hosted a research symposium in Vienna on this global research collaboration initiative. Dermarkar says COG sees opportunities to share learnings across technologies as well.
Beyond technical solutions, there is a human element. There is increasing collaboration between academia and industry. There are programmes across all of the utilities to understand and integrate Indigenous knowledge, input and culture into decision-making. Organisations like North American Young Generation in Nuclear (NAYGN) and Women in Nuclear (WiN) Canada are also taking a greater role in informing policy.
The Canadian government’s choice of a new president and chair for its national nuclear regulator, Rumina Velshi, has set an agenda for greater gender balance in the industry. Leadership sets culture and the industry has taken notice.
There is expectation a new generation will take up the call to ensure nuclear is an important part of the future.
Four innovation incubators
Canadian utilities and the country’s national nuclear lab have each created a mechanism to tap into innovative ideas from their own employees, suppliers, their local community and partners at their individual sites:
• Ontario Power Generation has created X-Lab, an incubator space for exploring new ideas that come out of the nuclear plant and from OPG’s own employees. For example, a new monitoring and diagnostic centre offers more effective condition-based maintenance through a cross-functional team that monitors the condition of plant equipment based on data provided by sensors;
• Bruce Power has created the Ontario Nuclear Innovation Institute as an international centre of excellence for applied research and training and has further developed its medical isotope production;
• New Brunswick Power has announced development of an Advanced Nuclear SMR Research Cluster to investigate commercialisation and a demonstration plant at Point Lepreau;
• Canadian Nuclear Laboratories has announced its Centre for Reactor Sustainability, which capitalises on its 60 years of operation of the National Research Universal reactor and decades of research. It provides R&D and services for Candu/PHWR and light water reactor technologies.
To help the industry leverage the work done at these centres, the CANDU Owners Group is creating a mechanism for sharing ideas and, where it is valuable to do so, will create joint projects to further research done at one site, as a shared project among multiple members. |
What to Expect Once You Enroll in a College of Accounting?
Accounting (or accountancy) is a subject area based on the study of all aspects of financial information and financial transactions. This discipline is also described as ‘the language of business’ because it reveals the key ideas behind how a business exists and performs. The finances of any corporation or organisation will probably fail without a reliable specialist responsible for accounting.
What does it mean to be an expert accountant? The stereotypes probably sound like ‘a mathematical genius’ or ‘a person with organizational superpowers’. An accountant is a person responsible for all financial affairs of an organisation, through procedures of identifying, recording, measuring, classifying, verifying, summarizing, analyzing and interpreting financial information.
Sounds pretty frightening! In many companies, however, these activities are carried out by a team of people rather than by an individual accountant. Even so, as a future professional in this sphere, you cannot predict which responsibility will land on your shoulders. So during your studies you will learn how to record, store and organize financial transactions, as well as how to summarize and analyze this data. You will also learn how to report the information properly to tax collection entities and oversight bodies.
The practice of accounting has a standard framework of guidelines called generally accepted accounting principles (GAAP). Accountants need to know and follow these principles when preparing financial statements.
Modern accounting consists of several sizable subfields:
- Financial accounting
This sphere studies and reports a firm's financial information to external users. It involves the preparation of financial statements intended for public consumption, so that everyone interested in such information (investors, suppliers and regulators) can easily obtain it.
- Management accounting
This one deals with processing the data (both financial and non-financial) that managers need to do their jobs well and make sound decisions for the company's development. This subfield of accounting produces future-oriented reports (for example, the organization's budget for the following year). Cost-benefit analysis is the foundation of management accounting reports.
- Auditing
Auditing deals with the verification and evaluation of a company's financial statements. The auditor's job is to express his/her independent opinion on the fairness of the organization's financial statements in accordance with GAAP.
- Accounting information system (AIS)
This is a branch of accounting concerned with the processing of financial and accounting data as part of a company's information system. AIS is predominantly a computer-based method for tracking accounting activity in combination with information technology resources.
- Tax accounting
Each country has its own tax system that requires the use of specialised accounting principles for tax purposes. Tax accounting therefore deals with the preparation, analysis and presentation of tax payments and tax returns.
Nowadays, many people choose this well-paid and prestigious profession. It is true that accountants will always be in strong demand because they do a significant piece of work for every corporation. The only problem is that entering this occupation is not an easy task. You will face serious challenges on your way to the desired diploma.
Students who choose this sphere as the subject of their future professional activity have to devote a great deal of time and effort to mastering all its aspects. As you have already understood from the article above, accounting is all about complicated concepts, strict rules and calculations. Your professors will give you plenty of homework tasks to rack your brains over. But do not worry if you feel puzzled by some research paper or case study; you can still rely on professional essay services. Reach new heights without extraordinary effort.
Would the title above make you click on it to learn more about an intriguing failed marketing venture? It sure would. It is a trick used in many areas of our social life to catch the reader's eye and spark interest in the content.
Marketers come up with countless ideas for using psychology to play on human curiosity. When it comes to making a reader click on a website, a good headline that stands out from dozens of others is enough to spark interest. Selling a product or service, however, takes a longer and more elaborate list of steps. That is where advertising comes in handy to beat the competition in the market and attract consumers.
Types and Ways of Advertising
Advertising is a foremost tool at the disposal of manufacturers and corporations to deliver their message to audiences. Its target is to attract as many shoppers and clients as possible. Think of your favorite commercials and ask yourself what makes them so good and memorable. By examining the most popular and successful advertisements, it becomes easier to understand what methods you can use for your own purposes. Remember the Coca-Cola Christmas truck advert on TV, or Ronald McDonald walking through the mall in a red wig?
Both have the same goal of selling a product but use different types of advertisement. The skills and tactics involved in publicity are innumerable and depend on the marketer's creativity. Among the most common types of advertising are:
- Television: it undoubtedly reaches a broader audience of consumers than booklets or brochures. Think of the Super Bowl adverts. Brands choose the most high-profile, attractive and amusing content for their halftime commercials. If you need to write an essay on advertising, or to build your own ad and develop an idea, we recommend studying some of these adverts. Try to examine not just what is catchy for you personally but which audience was meant to be addressed. Work out what was used to attract a specific target group. Decide whether it depends on age, status or occupation.
- Online: today the most popular and accessible medium for advertising. Besides ads on web pages, the Internet offers an outstanding opportunity for promoting products and services on social networks. If you follow celebrities on Instagram, it is quite likely you want to copy your favorite, good-looking celebrity's style. Fashion houses and brands know the power of followers and use their famous clients' accounts and photos as one more channel for advertising.
- Product placement: do you sometimes wonder, ‘Hey, they are eating KFC in almost every episode of this show’? If so, you have noticed the product placement type of advertising. It is essentially a concealed way of advertising a product by placing it in a movie or TV-show episode.
- Print: usually brochures handed out in the street or leaflets and handouts in shops. If you have ever received advertisements in your mailbox, that is also an example of the print form of advertising.
- Outdoor: any kind of advertising you meet outside your home. Think of the billboards on your way to school or work, ads on buses, posters, or the huge electronic boards in Times Square in NY or Shibuya in Tokyo.
- Celebrity branding: many celebrities promote products on TV or in magazines. If your beloved actress or singer says from the screen that this particular mascara makes her lashes look longer, you tend to believe it, judging by her attractive pics. |
By 1925, Frank Phillips, Oklahoma oil baron and founder of Phillips Petroleum, was living comfortably in a 26-room mansion in the heart of Bartlesville, Oklahoma, and apparently thought it was time to branch out and construct a ranch home in the Osage Hills, 12 miles south of the city.
Phillips’ final product was no ordinary ranch, carved out in a six-square-mile strip of prairie with a plentiful supply of black oak trees. Today, nearly 100 years later, visitors from throughout the world flock here for meetings, social gatherings and for an opportunity to see how affluent oil people lived during the first half of the 20th century.
Phillips named the property “The Frank Phillips Ranch,” but later, at the suggestion of a business associate, the name was changed to “Woolaroc” (a play on words) representing woods, lakes and rocks, plenty of which remain today throughout the 3,700-acre site.
The first structure built on the site was a sprawling log cabin, and with its completion, there was no stopping. The cabin is now called “the lodge” and includes five bedrooms, six bathrooms and two porches overlooking Clyde Lake. To say that the residence is stocked with art, animal heads and horns, and exotic rugs would be an understatement. Also exhibited is a grand piano, which dates to Phillips’ time on the property.
Completion of the property in the late 1920s led to an influx of movers and shakers of that era, including humorist Will Rogers, actor Tom Mix, the Federal Reserve Board and Kansas Gov. Alf Landon (who himself was no stranger to the oil business). According to a book sold in the gift store, Landon launched his ill-fated campaign for the presidency at Woolaroc.
With no shortage of resources to complete his project, engineers with Phillips Petroleum designed roads, bridges, dams and spillways that circled the property and remain to this day. To reach the lodge, we drove 2 miles on a snake-like road, dodging buffaloes and turkeys, a zebra or two, and llamas, all of which seemed to be very much at home.
Phillips grew up on a farm in Nebraska where his family raised Holstein cattle. So it came as no surprise that he built a huge stone barn to house the Woolaroc herd. To get a better view of the ranch, Phillips constructed a staircase to the top of the barn, reaching an observation tower where visitors can view the ranch’s entirety.
From what we were told, Phillips’ promotional and marketing skills ranked right up there with drilling oil wells; so it came as no surprise that aviation and Charles Lindbergh’s flight across the Atlantic Ocean more than piqued his interest.
Phillips ordered Phillips Petroleum chemists to develop fuels suitable for airplanes. Three months after Lindbergh’s flight, all eyes were on California in anticipation of an airplane race from Oakland to Hawaii. Arthur Goebel Jr., an accomplished pilot, was interested in entering the race, and after he contacted Phillips, his participation was certain.
Goebel climbed aboard a plane (coincidentally named Woolaroc), and powered by 417 gallons of NuAviation Fuel manufactured by Phillips Petroleum, was airborne. Twenty-six hours after takeoff, he was declared the undisputed winner. Today, the plane is prominently displayed in a 40,000-square-foot museum that was built by Phillips at Woolaroc.
The plane is in good company. It’s displayed alongside one of the nation’s largest collections of Colt firearms, sculptures, western artwork, and the re-creation of an old-time Phillips 66 service station. An exact replica of Phillips’ Wall Street office, which didn’t seem all that ostentatious, can be viewed behind glass.
In 1926, Phillips established an annual Cow Thieves and Outlaws Reunion, a summer picnic on Clyde Lake that attracted plenty of both along with elites associated with the oil industry. It appears that all in attendance bonded well together and enjoyed each other’s company.
Phillips cast a long shadow over Bartlesville and his philanthropy is remembered to this day. Each year, he made a contribution to every church in town and assisted countless other civic causes. For reasons unknown, Phillips opened his first Phillips 66 service station in Wichita.
Six years prior to his death, Phillips and his wife, Jane, deeded his prized ranch over to the Frank Phillips Foundation. This organization continues to maintain the property to this day. Jane Phillips died in 1948, followed by Frank two years later. They, along with their son John, who died in 1953, are interred in a mausoleum that is nestled into a side of a hill on the ranch.
Phillips' reasons for constructing the ranch say it all.
“During my lifetime, I derived a great deal of pleasure in building Woolaroc Ranch and Museum,” Phillips said. “Through this medium, I tried to preserve and perpetuate a part of the country I knew as a young man.”
Richard Shank is a retired AT&T manager, is employed in the health care industry and has farming interests in Saline County. Email: [email protected]. |
A printing defect characterized by a spotty, non-uniform appearance in solid printed areas. Different print characteristics have different types of mottle; there is a density mottle, a gloss mottle, or a color mottle, depending on what aspect is being affected. All forms of mottle are typically the result of non-uniform ink absorbency across the surface of the paper. A mottled appearance is also called galvanized. A complex type of mottle is called back trap mottle. (See "Back Trap Mottle (BTM)".)
A type of mottle characteristic of calendered papers is known as coating mottle. |
The Power-2-Heat plant in Leopoldau converts excess electricity into heat and thus contributes to energy being used intelligently and more efficiently. The state-of-the-art facility can take in the power of up to ten wind turbines and convert it into heat with virtually no loss. The resulting heat is fed into the Vienna district heating network in the form of hot water and can supply clean heat for up to 20,000 households.
Heat is an essential sector of the energy transition: it facilitates the integration of renewable electricity into the energy system. Wien Energie relies on innovative solutions – in December 2017, a Power-2-Heat plant in Leopoldau was put into operation. This state-of-the-art facility couples the power grid and the district heating networks to use energy intelligently and more efficiently. The integrated consideration of different sectors is essential for the energy transition. The Power-2-Heat plant in Leopoldau is about an unconventional coupling of the electricity and heat sectors. This makes sense ecologically, since fossil fuels are saved for heat generation. Heat can also be stored more easily and economically than electricity and is therefore an important part of an integrated energy system.
Energy from 10 wind turbines
The Power-2-Heat system is powered by clean excess electricity. An excess of energy arises, for example, from particularly strong wind: wind turbines may then produce significantly more electricity than is consumed at that moment. With the 20-megawatt Power-2-Heat plant, Wien Energie is creating the opportunity to further consolidate its district heating network and at the same time is taking an important step in the energy transition. Wien Energie has the capacity to take in the power from up to ten wind turbines and convert it into heat with virtually no loss.
The new system is activated in the event of a power oversupply and absorbs the surplus. As a result, the system contributes to the stabilization of the power grid and enables the full use of electricity from renewable sources. The resulting heat is fed into the Vienna district heating network in the form of hot water and can thus be used directly and efficiently in the surrounding households in north-western Vienna. Up to 20,000 households can thus be supplied with clean heat.
This is how the Power-2-Heat system works
The production of renewable energy such as solar or wind power is difficult to control and weather-dependent. Thus, in strong winds much more energy is produced than is needed at that moment, creating an oversupply. In this case, the Power-2-Heat system is activated. In the plant itself, the excess electricity from the grid is used in electrode boilers to heat water. A heat exchanger feeds the hot water, at approximately 160 degrees Celsius, into the district heating network. The facility in Leopoldau consists of two plants of 10 MW output each, which can be operated independently of each other. If one boiler fails, the second can take over the load immediately.
The plants do not run continuously, but only draw power when there is an oversupply. In addition, the system serves as a backup reserve for the district heating network in the cold months; in that mode, plant operation directly follows the needs of the district heating network. The dispatch order comes directly from Wien Energie's load dispatcher. The entire system is integrated into Wien Energie's control technology and is operated and monitored from Spittelau.
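As a rough plausibility check on the figures above, the water flow that 20 MW can heat follows from Q = m·c·ΔT. The 160 degree feed temperature is from the article; the return temperature (and everything derived from it) is my assumption, so treat this as an order-of-magnitude sketch:

```python
# Rough sizing sketch for the Leopoldau Power-2-Heat plant.
# Assumed: district-heating return water at ~60 degC (not stated in the
# article); the ~160 degC feed temperature is from the article.
P_electric = 20e6                 # W, two 10 MW electrode boilers
c_water = 4186.0                  # J/(kg*K), specific heat of water
T_feed, T_return = 160.0, 60.0    # degC (return temp is an assumption)

m_dot = P_electric / (c_water * (T_feed - T_return))
print(f"Water heated at full power: ~{m_dot:.0f} kg/s")   # ~48 kg/s

# Energy delivered per full-power hour, spread over 20,000 households:
E_hour_kwh = 20e3                 # 20 MW for 1 h = 20,000 kWh
print(f"Per household per full-power hour: {E_hour_kwh/20_000:.1f} kWh")
```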
Key data: Power-2-Heat plant Leopoldau
- two electrode boilers with 10 MW each and max. 12 bar
- heats water to over 160 degrees Celsius with excess electricity from the grid
- can supply up to 20,000 households with heat
- can take up electricity from up to 10 wind turbines
Key data: Wien Energie district heating
- Wien Energie supplies 380,000 households and 6,800 large-scale customers with district heat
- In its beginnings almost 50 years ago, the district heating network was 26 kilometers long and served the Vienna General Hospital and a few large community buildings. Today there are more than 1,200 kilometers of pipes (equivalent to the distance from Vienna to Paris) and district heat can be obtained in all 23 districts.
Wien Energie GmbH |
Corporate social responsibility is not a new concept in India. However, what is new is the shift in focus from making profits to meeting societal challenges.
Giving a universal definition of corporate social responsibility is a bit difficult, as there is no common definition as such.
However, there are a few common threads that connect all the perspectives of CSR with each other, the dedication to serve society being the most important of them. Perhaps the most apt definition of corporate social responsibility (CSR) has been given by the World Business Council for Sustainable Development, which says: "Corporate Social Responsibility is the continuing commitment by business to behave ethically and contribute to economic development while improving the quality of life of the workforce and their families as well as of the local community and society at large".
Thus, the meaning of CSR is twofold. On one hand, it exhibits the ethical behavior that an organization exhibits towards its internal and external stakeholders (customers as well as employees). On the other hand, it denotes the responsibility of an organization towards the environment and society in which it operates.
Benefits of Corporate Social Responsibility
Corporate social responsibility offers manifold benefits, both internal and external, to the companies involved in various projects. Externally, it creates a positive image of the company amongst people and earns special respect amongst its peers. It creates short-term employment opportunities through projects like the construction of parks and schools. Working with the interests of the local community in view brings a wide range of business benefits; for example, for many businesses, local customers are an important source of sales. By improving its reputation, a company may find it easier to recruit and retain employees. Businesses also have a wider impact on the environment: the plantation and cultivation activities taken up by Intel India are a step in this direction, and recycling used products is a step towards minimizing waste.
Internally, it cultivates a sense of loyalty among employees and trust in the organization's ethics. It improves the company's operational efficiency and is often accompanied by increases in quality and productivity. More importantly, it serves as a soothing diversion from routine workplace practices and gives employees a feeling of satisfaction and meaning in their lives. Employees feel more motivated and are thus more productive. Apart from this, CSR helps ensure that the organization complies with regulatory requirements.
CSR Importance and its Relevance Today
With the amount of information available globally about companies, products and brands through easily accessible channels such as the internet, customers want to buy products from trusted brands, employees want to work for companies that respect them, and NGOs want to work with companies that share their vision for the benefit of people. As Peter Drucker said, "The 21st century will be the century of the social sector organization. The more economy, money, and information become global, the more community will matter." (Corporate Watch report, 2006).
According to Strategic Corporate Social Responsibility by William B. Werther and David Chandler, three trends are going to matter in the future:
Increasing affluence: Elite customers can afford to pay more for premium brands, but poor customers might not be willing to pay so much for a brand; instead they prefer to spend their money on businesses that offer them better value.
Changing social expectations: It is natural that customers expect more from the companies whose products they buy, but recent controversies and corporate scandals have reduced trust and confidence in the regulatory bodies and organizations that govern corporations.
Globalization and the free flow of information: With the growth of media and easy access to information through mobile and TV, even a minor mistake by a company is made public in no time. This sometimes fuels activist groups and like-minded people to spread the message, which can lead to situations like a boycott of the product.
There are a few key steps to implement CSR successfully (Corporate Social Responsibility, 2003):
• Better communication between top management and the organization
• Appoint a dedicated CSR position
• Good relationships with customers, suppliers and stakeholders
• An annual CSR audit
• A feedback process
It can be concluded that in today's informative world, where information is readily available to the general public, CSR has become an important part of any successful organization. An organization in the present world cannot succeed without taking social responsibility into account. CSR is a vital component for any organization seeking lasting success and a strong brand. |
Given this shift, different ways of conducting business have arisen – new work patterns and practices have emerged, giving birth to a new breed of worker with new skills and abilities. And in this new world order, not only has knowledge become crucial, the meaning of knowledge and the way it is used have also changed from the earlier ages.
Knowledge in the earlier ages
In the agrarian age there was very little formal education; people learned or gained knowledge by going about their daily jobs in the community. Their knowledge consisted of finding out how to “do things”. In the industrial age there was a shift to formal education in schools, and knowledge meant knowing “what had to be done”. Schools taught the skills required for this, including social and citizenship skills. Education was managed by a bureaucratic system; students followed the rules of acquiring knowledge and lived by societal norms. The efficient functioning of society was given greater importance than individuals, who were all bred from a similar mould.
Knowledge in the knowledge age
In the knowledge age, the very meaning of the term “knowledge” has shifted. It is no longer about “things that are known”, taught by experts in the field and written about in books under different heads and subjects. It is no longer just information stored in people's minds. It has gone beyond that and become a force, an energy, a system of interconnected networks that does things, that causes things to happen. Knowledge is not static; it is a kind of dynamism that achieves objectives, that works towards a goal. Knowledge is produced and used by people with complementary expertise who work together to achieve something specific.
Have to discover, evaluate, use
In the knowledge age, people still need to be taught things that are known, as in the previous industrial age, but the knowledge itself is not the be-all and end-all. Knowledge is what allows one to think, learn, change and move forward. People in the knowledge age have to discover, evaluate and use information on a continuous basis. They have to communicate and share information and knowledge and work efficiently with others. They have to think and learn by themselves, and adapt and change on the basis of this knowledge. They need a different skill-set to become true citizens of the knowledge economy.
Education systems as they exist do not impart these skills; they need to change their attitudes and teaching techniques to teach students to think for themselves, not to be spoon-fed: to understand the new meaning of knowledge, to appreciate the context in which it works, to use the ideas, information and knowledge they have to create and innovate, and to share and grow their own knowledge. We thus need a shift in our mind-set to harness fully the power of the knowledge age. |
The Need for Compaction Improvement
Compaction of an asphalt mix has long been a standard step in the construction and performance of pavements. It has commonly been assumed that the results of compaction are captured entirely by the resulting density and air voids, and do not depend on the construction technique. It has also been assumed that cracks induced by construction merely look unattractive and do not actually affect the physical performance of the surfacing.
Extensive research has been conducted to establish the real cause and effect of construction cracks. It was established that compaction by steel roller is responsible for construction-induced cracks, due to the geometric mismatch between the roller and the base. Damage and cracks in roads are not only due to cold weather; these issues have also been observed on roads in countries with warm climates.
Design of the Asphalt Multi Integrated Roller (AMIR)
The AMIR has a belt with multiple layers made of specific rubber compounds. The belt and rollers create a plate-like contact surface of approximately three square meters for compaction. The rubber belt, being elastic, provides a surface similar in stiffness to the asphalt surface. Since the flat plate makes a large contact area available, the stress applied to the asphalt is small compared to that applied by conventional steel rollers. Furthermore, at the same rolling speed, the load duration under AMIR is thirty times greater than under normal steel rollers. The large contact area reduces the horizontal forces applied to the asphalt and ensures confinement during compaction. Thus, roller-induced cracking is eliminated, permeability is reduced, and resistance to fatigue damage and tensile strength are increased.
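The stress argument can be made concrete with a back-of-envelope comparison. Only AMIR's roughly three-square-meter contact patch comes from the text; the roller weight and the steel drum's contact strip below are illustrative assumptions:

```python
# Back-of-envelope contact-stress comparison (illustrative numbers).
roller_weight_n = 10_000 * 9.81        # assume a 10-tonne roller

# Steel drum: roughly a narrow line contact. Assume a 2.0 m wide drum
# with a ~0.05 m contact strip (both assumptions).
steel_area = 2.0 * 0.05                # m^2
# AMIR: flat rubber-belt plate of ~3 m^2 (figure given in the text).
amir_area = 3.0                        # m^2

steel_stress = roller_weight_n / steel_area
amir_stress = roller_weight_n / amir_area
print(f"Steel drum: ~{steel_stress/1e3:.0f} kPa")    # ~981 kPa
print(f"AMIR plate: ~{amir_stress/1e3:.0f} kPa")     # ~33 kPa
print(f"Ratio: ~{steel_stress/amir_stress:.0f}x lower stress under AMIR")
```

Under these assumed numbers the applied stress drops by more than an order of magnitude, which is the mechanism behind the reduced cracking described above.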
Test Results of Asphalt Multi Integrated Roller
AMIR has overcome these compaction problems and produces a smooth surface free of cracks. Numerous fatigue tests were performed on different mixes, comparing the compaction produced by normal rollers with that produced by the Asphalt Multi Integrated Roller. Contributing factors like rolling direction and type of mix were varied to obtain results under different test conditions. The results demonstrated that compaction by the Asphalt Multi Integrated Roller, with either type of mix, produced approximately twice the compaction fatigue life. It was also observed that rolling direction had an insignificant effect on the fatigue life of the mix produced by the Asphalt Multi Integrated Roller, whereas fatigue resistance to transverse cracking was low for the mix produced by steel rollers. The research concluded that the cracks produced by steel rollers can decrease fatigue life considerably, and that these problems can be overcome by use of the Asphalt Multi Integrated Roller. Furthermore, mechanical properties like density, fatigue life, tensile strength and moisture resistance are improved considerably.
Problems with Conventional Steel Drum Rollers
Cracks produced during compaction by normal steel rollers are due to the mismatch between the rigid circular steel drum and the soft, flat asphalt pavement. Since the steel drums are circular, only a small contact area is available for compacting the asphalt. As the roller moves, the asphalt at the front of the roller is pushed forward, producing a dragging force in the asphalt at the rear. This pushing and dragging action produces cracks in the asphalt. Rollers with rubber tires were introduced to press the asphalt and seal the cracks, but they have not succeeded in preventing crack development. Furthermore, these rollers use water to avoid picking up asphalt during compaction, which lowers the temperature of the asphalt and prevents the cracks from closing. |
Based on data from the World Bank, in 2011 natural gas burned as flare gas reached more than 150 billion cubic meters (bcm), adding about 400 million tons of CO2 emissions annually. Russia is the biggest flare-gas producer with 37.4 bcm per year, followed by Nigeria with 14.6 bcm annually, while Indonesia ranks fourth with 2.2 bcm of gas flared each year. Several countries have started to pay attention to the impact of flare gas and are making policies and regulations to reduce it, among them Canada, Norway, Russia, Kazakhstan, and Qatar.
According to the data, Indonesia, one of the world's leading gas-flaring countries, has sought through the National Planning Agency (Bappenas) to reduce flared gas through "the Indonesian Climate Change Sectoral Roadmap 2010". Its implementation is supported by the National Action Plan for Emission Reduction and regulated by Presidential Regulation No. 61/2011, so that flare gas data can be obtained accurately and in a structured way. In principle, oil and gas contractors must seek government permission to flare gas, in accordance with the Decree of the Minister of Energy and Mineral Resources of the Republic of Indonesia No. 31 of 2012 on the Management of Gas Flares in Oil and Gas Processing. But this ministerial decree does not set penalties or fines for companies that violate the regulations; the rules are only normative.
It is necessary to formulate specific mechanisms for economic instruments and other fiscal tools so that existing regulations become more effective and efficient, such as more detailed incentive or disincentive arrangements imposed on gas producers and processors.
Based on the analysis of the Low Carbon Support Program to the Ministry of Finance of Indonesia, most of the flaring takes place offshore. Offshore Production Sharing Contracts (PSCs) account for about 48.2% of gas flaring and onshore PSCs for 25.6%. Pertamina & Partners and Joint Operating Bodies for Production Sharing Contracts contributed 21.2% and 5.0% respectively. Flare gas data by company show that the five largest flare-gas producers account for 76.4% of total national flare gas production: BP Tangguh, Pertamina & Partners, Petrochina International Jabung Ltd, CNOOC SES Ltd, and Total Indonesia. The top 10 and top 15 companies contributed 87.5% and 93.2% respectively.
A positive relationship between oilfield size and the associated gas produced shows that total flare gas is not the only indicator of efficiency. Applying a Gas Flaring to Oil Production Ratio, as promoted by the Global Gas Flaring Reduction partnership, is more relevant and provides additional information. The difficulty is that oil and gas are generally produced simultaneously, making it very hard to separate flare gas attributable to oil production from flare gas attributable to gas production.
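A flaring-intensity metric of the kind referred to here simply divides gas flared by oil produced. The sketch below shows that ratio; the flared volumes are the article's country totals, while the oil-production figures are placeholders to be replaced with real data:

```python
def flaring_intensity(flared_bcm, oil_mmbbl_per_year):
    """Gas flared per barrel of oil produced, in cubic metres per bbl.
    flared_bcm: billion cubic metres flared per year.
    oil_mmbbl_per_year: oil output in million barrels per year (the values
    used below are hypothetical -- substitute real production data)."""
    return (flared_bcm * 1e9) / (oil_mmbbl_per_year * 1e6)

# Flared volumes from the article; production figures are placeholders.
for country, bcm, mmbbl in [("Russia", 37.4, 3700.0),
                            ("Indonesia", 2.2, 330.0)]:
    print(f"{country}: ~{flaring_intensity(bcm, mmbbl):.1f} m3 flared/bbl")
```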
To reduce the production of flare gas, the following things can be done:
1. Flaring can be permitted within certain limits for safety and maintenance purposes.
2. The polluter-pays principle should apply in policy formulation, so that the cost of pollution is charged directly to those who should bear it.
3. Variations in emissions per barrel of oil equivalent between locations are large enough that the investment required to mitigate emissions also varies considerably.
4. Putting the revenues from fiscal disincentives to good use is important to ensure fairness and to fund investment.
From all the above, we can conclude that there are three policy recommendations the government can pursue to minimize flare gas: commercializing flare gas through regulation, enforcing government regulations on oil and gas companies, and applying fiscal disincentives. |
Waste Management & Clean Energy: Production from Municipal Solid Waste (Hardback)
Waste-to-Energy is one of the key technologies for sustainable waste management.
The book by Laura Mastellone offers a comprehensive overview of the various processes for thermal waste treatment such as incineration, pyrolysis, and gasification.
It is instrumental for understanding objectives, functioning, residues, and environmental impacts of thermal processes.
This is worthwhile reading for any expert in the field of resources and waste management.
- Format: Hardback
- Pages: 281 pages
- Publisher: Nova Science Publishers Inc
- Publication Date: 01/06/2015
- Category: Waste management
- ISBN: 9781634638272 |
- Category: Renewables
- Energy type: Biomass
- Project type: Asset
SSE Barkip anaerobic digestion plant is the largest combined organic waste treatment and energy generating facility in Scotland. This innovatively designed plant can process up to 75,000 tonnes of organic and food waste annually and produce 2.2MW of renewable electricity 24/7 from the biogas produced. Bacteria break down the waste to produce methane rich biogas, combusted in gas engines to generate electricity. All the heat used in the process is recovered from the engines. The plant produces a low cost PAS110 fertiliser to support local agriculture.
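A quick sanity check relates the 2.2 MW rating to the 20,000-household figure. The capacity factor and the per-household consumption below are assumptions, not plant data, and the comparison mainly shows how sensitive such headline household counts are to the consumption figure assumed:

```python
# Quick check of the plant's headline figures (assumed inputs flagged).
capacity_mw = 2.2                 # from the article, generated 24/7
capacity_factor = 0.9             # assumption: typical for baseload AD plants
households = 20_000               # from the article
avg_household_kwh_year = 3_500    # assumption: ballpark annual consumption

annual_gwh = capacity_mw * 8_760 * capacity_factor / 1_000
per_household_kwh = annual_gwh * 1e6 / households
print(f"Annual output: ~{annual_gwh:.1f} GWh")          # ~17.3 GWh
print(f"Per household: ~{per_household_kwh:.0f} kWh "
      f"(vs ~{avg_household_kwh_year} kWh assumed demand)")
```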
The facility is a zero-waste solution and has a major role to play in meeting Scotland's renewable energy production and waste recycling targets. |
Lubricating grease mainly lubricates, protects and seals. Most greases are used for lubrication and are called antifriction greases. Antifriction grease mainly reduces mechanical friction and prevents mechanical wear; at the same time it protects metal from corrosion and seals out dust. Some greases are used mainly to prevent metal rust or corrosion and are known as protective greases, for example industrial Vaseline. A few greases are designed for sealing, called sealing greases, such as thread grease.
Greases are mostly semisolid substances with a unique fluidity. The working principle of grease is that the thickener holds the oil in the position where lubrication is needed; under load, the thickener releases the oil, which then does the lubricating. At room temperature and at rest, grease behaves like a solid: it maintains its own shape without flowing and adheres to metal without slipping. At high temperature, or under an external force exceeding a certain limit, it flows like a liquid. When grease is subjected to the shearing action of moving parts in a machine, it flows and lubricates, reducing friction and wear between the moving surfaces. When the shearing stops, it recovers a certain consistency. This special fluidity makes grease suitable for parts that lubricating oil cannot serve. In addition, since it is a semisolid substance, its sealing and protective effects are better than those of lubricating oil. |
By: Tracey Levison, Managing Partner, Above + Beyond Management Consulting
Different Types of Smarts
For over a century, people and businesses alike have been measuring success through the Intelligence Quotient (IQ), a test that focuses mostly on raw intelligence, language, and mathematical abilities. Granted, IQ tests are good at measuring certain mental faculties like logic, abstract reasoning, learning ability, and memory, but fail to evaluate the social awareness required to know when and where to apply one’s skills in real-life situations.
That’s where Emotional Intelligence (EQ) comes in. In a nutshell, EQ encompasses the ability to perceive, control and evaluate the emotions that influence how people engage with their day-to-day lives. It is a measurable component of who we are, just like IQ. Basically, if IQ measures how smart you are, then EQ determines how effectively you can use your smarts.
While some people perceive it as innate, EQ competencies are developed over time. As we age and experience life, we tend to grow more emotionally intelligent.
EQ: What is it made up of?
EQ isn’t a skill only practiced when dealing with others; it begins internally. In our coaching and self-awareness assessments, we break down emotional intelligence into 6 measurable scales:
1) Mood Labelling:
Mood Labelling measures the ability to accurately label feelings and emotions, and the extent to which someone can interpret their feelings as they occur. It reflects whether someone has developed a language for their emotions and is able to communicate as such.
This ties strongly to self-awareness, which is the competency that heightens self-confidence and helps determine one’s values and belief systems. This is important, because it gives us the ability to identify our own strengths and weaknesses.
2) Mood Monitoring:
Mood Monitoring measures the amount of energy someone puts forth in monitoring their feelings and emotions. It reflects how much thought one puts into their actions, the results of their actions, their mood, how they might be perceived, and generally how one feels.
When a person has very high monitoring, sometimes this is reflected in worrying behaviour. For this person, it is important to focus on managing their worry, being more mindful, and not getting ‘lost’ in self-reflection. When this is low, it might require a person to spend more time reflecting on things that occur and the results of their behaviours.
3) Self Control:
This measures the control one has over their feelings and emotions. This also provides insight into impulse control, which is important for coaching and leading others, and for working as part of a team.
4) Managing Emotional Influences:
This measures a person’s ability to stay neutral in highly emotional situations. It reflects how ‘swayed’ one gets by both their own emotions and the emotions of others, and their ability to persevere toward their goals in the face of these emotions.
5) Empathy:
Empathy reflects a person’s ability to understand the feelings and emotions of others. It is a reflection of how fully one listens to someone else’s situation and is able to validate what they hear. Empathy is an important quality in understanding others and establishing strong relationships.
6) Social Judgment:
This reflects a person’s ability to make appropriate decisions in social situations, based on the emotional states of others. It is a reflection of the finesse and attunement one has in a social situation.
Emotional intelligence, at its core, is all about self awareness and the awareness of others. Recognizing the importance of EQ is just the starting point for creating a new level of positive experiences personally and with others, both in our personal lives and our professional ones.
EQ and Effective Leadership
Every person in an organization has a unique set of skills and diverse passions. When you empower people and guide them on how to best apply those skills and passions, you’re going to have a company full of very engaged people.
That’s why an emotionally intelligent leader focuses on the behaviours that elevate the people in their organization. They actively encourage empowerment, kindness and appreciation. They do things like greet their team when they get to work in the morning, provide mentorship and coaching, celebrate success on a regular basis, recognize effort, and make time for others.
By recognizing people and caring about their well-being, leaders can effectively drive motivation and satisfaction across their teams, leading to better results and a happier culture.
How well do you understand your EQ? Contact us today to learn about how our assessments and coaching can elevate your EQ potential. For more information, click here. |
However if we look at the world of manufacturing, we can see that it is possible to create highly customised products on a production line. This can be achieved by decentralising the assembly process. Just look at the range of options available when you buy a new car. There are hundreds of possible configurations. If each car had to be built from scratch to order, auto manufacturers wouldn't be able to offer this service. It would take too long and be too costly. Instead they make the car up of many modules which are often subcontracted to other specialist manufacturers. These can then be combined in many different configurations on the assembly line as orders come in. This speeds up the process and enables more flexibility.
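The configuration count grows multiplicatively with independent modules, which is why modular assembly scales where building each product from scratch cannot. A toy illustration (the module names and option counts are invented):

```python
from math import prod

# Hypothetical option counts per independent module of a car.
modules = {"engine": 4, "transmission": 3, "trim": 5, "paint": 12, "wheels": 4}

# Each module choice is independent, so counts multiply.
configurations = prod(modules.values())
print(f"{configurations} possible configurations")   # 2880
```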
With a product as complex as a car (or a building) this is only made possible by the use of design software. This software allows products to be built virtually, with part and module interfaces modelled well in advance of assembly.
Here's a great example of decentralised manufacture in the building industry: |
What is Carbon Fiber?
Carbon fiber is a material of exceptionally high strength and stiffness. Carbon fiber is 2-3 times as strong and stiff as steel or aluminum of equal weight.
Carbon fiber is produced by carbonizing (cooking at roughly 2,000-3,000°F) a precursor material, usually rayon, PAN (polyacrylonitrile) or oil pitch. This process produces fibers that are 0.001mm in diameter. A bundle of 1000 of these fibers will be 1mm (0.040”) in diameter. These bundles are combined to produce unidirectional tape and various styles of woven and knitted fabrics. The carbon fabrics can then be combined with a plastic polymer (resin) to produce very light, strong, durable and beautiful parts.
During your carbon fiber fabrication training you’ll learn various methods of the fabrication process.
What is a Composite?
A composite is an object composed of two or more materials whose combined properties are greater than those of any of the single elements. Composite structures, as we define them, are a combination of fiber reinforcement and a resin polymer matrix. The fiber reinforcement is most commonly glass, carbon or aramid. These composite structures may also incorporate core material to add stiffness. Cores might be balsa wood, foam, or honeycomb made of aramid or aluminum. The combination of these composite materials can produce some of the lightest and strongest structures possible.
Industries You Can Enter with Composites & Carbon Fiber Training
• Performance Vehicle (Automotive, Sports/Racing, Off-Road & Energy Efficiency)
• Aerospace & Aviation (Aircraft Components, Satellites, Helicopters, Drones)
• Sporting (Bicycles, Canoes, Paddles, Snowboards & Skis, Surfboards, Football Helmets, Shoes and Cleats, Hockey Sticks)
• Green/Solar Energy (Wind Turbine and Wind Turbine Blades & Support Structures)
• Boating & Marine (Boats, Keels, Rudders, Masts & Rigging)
• Instruments (Cellos, Violins, Guitars, etc.)
• Medical (Operating Tables & Surgical Instruments)
• Transportation (Car and Truck Bodies and Fairings)
Composites & Carbon Fiber Training Processes & Methods
At the conclusion of your composite training which includes carbon fiber fabrication training at IYRS, you will be proficient in:
• Vacuum Bagging & Vacuum Infusion
• Open & Closed Molding Process
• Filament Winding
• In-Mold Coating (Gel Coat)
• Prepreg Processing
• Spray-Up Molding
• Safe Use & Operation of Curing Oven
• Composite Molds & Plugs Building
Career Opportunities & Research Resources
There is a groundswell of support for composites & carbon fiber training that is actively forming in Rhode Island and across the country, and IYRS is smack dab in the middle of it. According to the American Composites Manufacturers Association, the composites industry is projected to grow by 6.5 percent annually through 2020. Capitalize on this opportunity with the most comprehensive composites training program in the country!
• National Organization: American Composites Manufacturers Association (ACMA)
• Local Organization: Rhode Island Composites Alliance (RICA)
• National Conferences: CAMX Composites & Advanced Materials Expo
• Certifications: Certified Composites Technician (CCT) through ACMA (Cast Polymer, Compression Molding, Corrosion, Instructor, Light Resin Transfer Molding, Open Molding, Solid Surface, Vacuum Infusion and Wind Blade Repair) |
VHS English Version Product Number: V000031VEM
DVD English Version Product Number: V0000319EM
VHS Spanish Version Product Number: V000031VSM
DVD Spanish Version Product Number:
General Safety & Health
Length of Video (in Minutes):
MARCOM Group Ltd., The
Description:
MARCOM's "Industrial Fire Prevention" Videotape Program looks at fires in an industrial setting and reviews the steps an employee should take in a fire emergency. This course covers:
- Preventing industrial fires.
- The concept of "flashpoint".
- Handling flammable materials.
- ...and more.
The Videotape Program comes with a comprehensive Leader's Guide, reproducible Scheduling & Attendance Form, Employee Quiz, Training Certificate and Training Log.
Industrial Fire Prevention Video and DVD Excerpt: Since the dawn of man, fire has been a powerful tool, allowing us to cook food, keep warm, and illuminate our surroundings. But throughout our history fire has also been a devastating destroyer. Even with modern firefighting techniques, accidental fires are still a leading cause of destruction, disability and death. And today's fires are more dangerous than ever, because plastics, chemicals, and other man-made substances can cause fires to spread quickly, as well as give off toxic fumes. The best way to fight a fire is to prevent it. But first you must know what causes things to burn. All fires involve three elements: heat, fuel and oxygen. Removing any of these will stop a fire. Let's look at each of them in greater detail. Fires start with heat, which serves as a source of ignition. Heat can be generated by many things, including open flames, static electricity, cutting and welding operations, faulty electrical circuits, unshielded hot surfaces, friction and chemical reactions. Once a fire is burning it produces more heat and grows even larger. As long as there is enough fuel and oxygen, a fire will continue to spread.
What is Business Memo in Communication? Functions of Memorandum, Importance of Memorandum, Business Memo. Memo is the short form of memorandum. The literal meaning of the word memorandum is a note to assist the memory. A memo is a short piece of writing (a short letter), generally used for internal communication between executives and subordinates or between officers of the same level of an organization. It is also called an inter-office memorandum.
What is Business Memo
According to RC Sharma and Krishna Mohan, “A memorandum is a short piece of writing generally used by the officers of an organization for communicating among themselves.”
Rajendra Pal and Korlahlli say, “A memo is used for internal communication between executives and subordinates or between officers of the same level. It is never sent outside the organization.”
According to Lesikar and Petit, “Memorandums of course are the letters written inside the organization, although a few companies use them in outside communication.”
From the above discussion we can conclude that a memo is a short piece of writing used between executives and subordinates or between officers of the same level of an organization.
Importance or Functions of Memorandum
Memorandum is one of the most important tools for internal communication. It is widely used for communicating with people within the organization. It contains information on the routine activities of an organization and is used for different purposes. The functions of the memorandum are discussed below from different perspectives:
- Presenting Informal Report: Sometimes memo is used to present informal report to superiors. Informal reports are usually short and informational and are presented in memo form. Findings and recommendations are presented by such memo which helps managers take proper decision.
- Providing Suggestions and Instructions: Memorandum is very useful for providing suggestions and instructions to subordinates. Managers and supervisors use it to give necessary suggestions and invaluable instructions to their subordinates so that they can perform their activities properly.
- Providing Response: Memo is also used to respond to requests made through it. Sometimes a superior requests someone to perform a particular job and asks that a response be sent back to the undersigned through a memo as well.
- Seeking Explanation: An office memo is often used to seek an explanation from a certain person on a particular issue. In an organization there may be some misunderstanding or mishap between persons, and a superior may want an explanation of such an event so that corrective measures may be taken.
- Making Request: Memo is also used to make requests to different parties in the organization. It is frequently used by managers and subordinates to request others to attend a meeting, execute an action, grant a favor, or for some other purpose.
- Conveying Information: Memorandum is widely used to convey information on different affairs to the people working in the organization. A new policy, a change in an existing policy, a decision, the appointment of a manager, a clarification, a modification, an announcement, etc. are communicated to the concerned parties by memorandum. So, the memo performs the function of conveying information to people within the organization.
- Solving Problems: Memo can also be used for providing solution to particular problem. Sometimes managers and supervisors issue memo to provide necessary instructions to the subordinates for better performing their daily activities.
From the above discussion we find that the memorandum performs different functions to carry out the purpose for which it is used. Indeed, it is used for many different purposes within the organization. More information is available in What is Business Letter? Objectives of Business Letter.
The superior lightness, durability, and elasticity of steel over iron renders it more suitable for many of the uses to which we put that metal, and one of the last substitutions that has been made is the construction of ships of steel. It is a well-known fact that within certain limits crank ships sail better than steady ones, because of their superior elasticity; they give to the impact of the waves, and glide through the opposing forces, when a steadier and safer ship would inflexibly receive the whole force, and not move an inch. This fact having been considered, the homogeneous metal, which is a sort of halfway house between steel and iron, is being largely employed in ship-building, and there are now in England many vessels in the course of construction. The first vessel ever built of steel was the small steam launch for the Livingstone Expedition up the Zambesi river, and another one, the Rainbow of 160 tuns, has just been launched from Mr. Laird's works on the Mersey, which is intended for the navigation of the Niger.
This article was originally published with the title "Steel Ships" in Scientific American 13, 49, 390 (August 1858) |
Recycling is good. You should do it.
According to a 2010 report from the Environmental Protection Agency (EPA), America recycled and composted just over 85 million tons of municipal solid waste, which provided an annual benefit of more than 186 million metric tons of reduced CO2 emissions, comparable to the annual greenhouse gas emissions from over 36 million cars. Good for us!
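As a quick, back-of-the-envelope sanity check on that comparison, using only the figures quoted above (the script and its variable names are illustrative, not from the EPA report):

```python
# Figures quoted above from the EPA's 2010 report
co2_reduced_tonnes = 186e6   # metric tons of CO2 emissions avoided per year
cars = 36e6                  # number of cars in the comparison

# Implied annual emissions per car, in metric tons of CO2
print(co2_reduced_tonnes / cars)  # ~5.2, roughly one passenger car's yearly output
```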
But here’s the thing: The amount of recycling that could be happening in America is pathetic, compared with what we’re currently doing. And it’s only gotten worse. Here’s the relevant chart from the EPA, showing the percentage of total municipal solid waste that gets recycled, in orange. After a big uptick in the late ‘80s we’ve slowed down considerably, and have basically been stuck at 34% since the recession.
Thirty-four percent puts us at 18th in the developed world, according to Forbes. Not terrible — somehow Canada performs even worse — but there’s clearly a long way to go.
And that figure lumps together all forms of recycling. The rate for the kind of recycling we are most familiar with, beverage container recycling, actually declined between 1990 and 2010, according to the Container Recycling Institute (CRI).
In 2010, Americans wasted (i.e., landfilled, incinerated, or littered) almost two out of every three beverage bottles and cans sold, CRI says. For aluminum cans alone, we landfilled, incinerated, or littered enough to reproduce the world’s entire commercial air fleet 25 times, according to the Institute. Between 2001 and 2010, the value of wasted beverage container materials exceeded $22 billion, the Institute says.
A Pepsi rep confirmed to me in an email that its North American bottles and cans contain just 10% recycled content on average. Bridget Croke, a representative for the Closed Loop Fund, an initiative founded by beverage companies to improve their recycling stats, acknowledged that the currently available volumes of recyclable material are inadequate for beverage companies to make more of their products out of such material.
“If we can increase the amount of material, and types of materials, recycled, these companies would then see increase in volume, and it would make it easier for them to purchase that material,” she told me.
As a result, even if overall recycling rates were still increasing before the recession, they apparently had no impact on the overall amount of waste being generated, according to EPA data. This is relevant for places like New York City, which now aim to have zero waste.
“The failure to recycle nearly two out of every three containers sold in the United States has monumental environmental impacts,” the Institute says, because those containers must be replaced in the product stream with new containers made from virgin materials, whose extraction and processing require more energy—and generate more pollutants—than making containers from recycled material.
Nor is the burden being shared equally; there are specific regions of the country, like the South and Mountain West, that do little to no recycling whatsoever, according to a 2008 study by nonprofit group BioCycle and Columbia University.
So, why have we stalled out? The most recent flatlining is partially due to the global economic slowdown — there is now less demand for raw materials, and as I’ve written previously, you need such demand to make local recycling programs profitable.
But according to experts, the broader problem stems from the patchwork nature of recycling programs, which is the way it’s been since recycling first took off in the ‘70s.
“Waste management falls to the municipal level,” Darby Hoover, a recycling expert at the Natural Resources Defense Council, told me. “You’ve got cities trying to manage waste with local policies and local infrastructure, and they’re subject to hyperlocal variables, so it’s hard to establish larger-scale policies.”
She pointed out that even San Francisco, Oakland, and Berkeley, all progressive bastions within minutes of each other, each have different recycling regimes.
“If there’s little bits of material you’re getting from a lot of different places, it’s not super efficient,” Croke agreed.
And what city and county efforts there are have been confounded by changing consumption habits that have put a premium on complex, single-serve containers, Hoover said. While that has put a dent in the overall mass of solid waste (namely, food waste), it ultimately means more and more packaging that is less recyclable.
Municipal efforts to boost recycling rates have actually ended up adding to this problem. To alleviate the burden on residents of having to sort all their recycling, many cities and counties have switched to “single stream” recycling, where all material gets deposited into a single container. But this also has the effect of reducing the overall amount of recyclable material available, Hoover said, since not all of it ends up getting properly sorted, even in processing facilities.
“You’ve traded off participation for value of material,” she said.
For the CRI, the solution to both problems is easy. The states with the highest, most profitable recycling programs all have deposit-return systems—the programs that pay you $0.05 and $0.10 per bottle or can returned, currently found in just 11 states. There, CRI says, container recycling rates range between 66% and 96%.
The institute estimates that if a 5-cent deposit rebate were placed on all carbonated and non-carbonated beverages throughout the United States, a 75% recycling rate would be achieved across the board. If the deposit were 10 cents or higher, 80%-90% recycling rates would be achieved.
Such programs “can poll higher than motherhood and apple pie,” according to CRI head Susan Collins, but are opposed by food and beverage companies for the costs they impose.
“Whoever is handling this material has to pay for transportation, and that’s true of everything that gets thrown away,” she said. “And private companies that sell these materials don’t want to pay.”
We could simply accept our waste problem and do things like build more waste-to-energy facilities, which convert treated refuse into electricity. These have already proliferated in Europe; as Fusion contributor Cole Rosengren has noted, Paris, Vienna, and Copenhagen all create significant portions of their district heating from incinerators located within city limits.
But Collins told me not to despair and to continue recycling. Indeed, I spoke briefly with the head of the largest recycling center in North America, recently completed in Reno by Republic waste services, who said that despite the sluggish global economy they are still getting by with what is coming in.
Collins did advise, though, that to truly change the tide and deliver the powerful climate benefits recycling can produce, we should start getting the ears of state legislators.
“In the halls of power, the voices of people aren’t quite as loud as the voices of lobbyists,” she said.
Rob covers business, economics and the environment for Fusion. He previously worked at Business Insider. He grew up in Chicago. |
"The general principles underlying the design of concrete-steel structures are quite well known. Concrete itself is a structural material which is sightly, permanent, very strong in compression, thoroughly reliable when made honestly, almost fool-proof when once allowed to "set" properly, adaptable to an almost unlimited number of uses, practically fire-proof as well as water-proof, and in addition its cost is always very reasonable. The great objection to concrete is its lack of tensile strength, and likewise its lack of elasticity and toughness. Thus it is a fortunate circumstance that which is one of the least expensive of metals, and which possesses to a marked degree those qualities which plain concrete lacks, also has a coefficient of expansion which is almost identical with that of concrete. Thus steel may be imbedded sic in concrete in the proper place, manner, and amount, and the resulting combination call "concrete-steel" possesses the good qualities of both of the above mentioned materials, the steel supplying the tensile strength, while the concrete supplied the compressive strength"--page 1 -2.
Harris, Elmo Golightly, 1861-1944
Civil, Architectural and Environmental Engineering
B.S. in Civil Engineering
Missouri School of Mines and Metallurgy
i, 15 pages
© 1914 Enoch Ray Needles, All rights reserved.
Thesis - Open Access
Library of Congress Subject Headings
Concrete bridges -- Design and construction
Iron and steel bridges -- Design and construction
Print OCLC #
Electronic OCLC #
Link to Catalog Record: http://laurel.lso.missouri.edu/record=b2609836~S5
Needles, Enoch Ray, "A study of the economic design of short span girder type concrete-steel highway bridges" (1914). Bachelors Theses. 109. |
Karnataka • India
|Time zone||IST (UTC+5:30)|
|Area||741.0 km² (286 sq mi)|
|Elevation||920 m (3,018 ft)|
|Population||8,425,970 (3rd) (2011)|
|Density||11,371/km² (29,451/sq mi)|
|Metro population||8,499,399 (5th) (2011)|
|PIN code||560 xxx|
|Telephone code||+91-(0)80-XXXX XXXX|
|UN/LOCODE||IN BLR|
|Vehicle registration||KA 01, KA 02, KA 03, KA 04, KA 05, KA 41, KA 50, KA 51, KA 53|
Bangalore (Indian English: [ˈbæŋgəloːɾ]), officially Bengaluru (Kannada: ಬೆಂಗಳೂರು, ['beŋgəɭuːru]), serves as the capital of the Indian state of Karnataka. Located on the Deccan Plateau in the south-eastern part of Karnataka, Bangalore has an estimated metropolitan population of 65 lakh (6.5 million), making it India's third-most populous city and fifth-largest metropolitan area. Though historically attested at least since 900 C.E., recorded history of the city starts from 1537, when Kempe Gowda I, widely regarded as the founder of modern Bangalore, built a mud fort and established it as a province of the Vijayanagara Empire.
During the British Raj, Bangalore developed as a center for colonial rule in South India. The establishment of the Bangalore Cantonment brought in large numbers of migrants from other parts of the country. Since independence in 1947, Bangalore has developed into one of India's major economic hubs and today counts among the best places in the world to do business. Several public sector heavy industries, software companies, aerospace, telecommunications, machine tools, heavy equipment, and defense establishments call Bangalore home. Known for a long time as the 'Pensioner's Paradise', Bangalore today has received the appellation of the Silicon Valley of India due to its pre-eminent position as India's technology capital. Home to prestigious colleges and research institutions, the city has the second-highest literacy rate among the metropolitan cities in the nation. As a large and growing metropolis in the developing world, Bangalore continues to struggle with problems such as air pollution, traffic congestion, and crime.
The name Bangalore is an anglicized version of the city's name in the Kannada language, Bengalūru. A ninth-century Western Ganga Dynasty stone inscription on a "vīra kallu" (literally, "hero stone," a rock edict extolling the virtues of a warrior) contains the earliest reference to the name "Bengaluru." In that inscription, found in Begur, "Bengaluru" refers to a battleground in 890 C.E. It states that the place belonged to the Ganga kingdom until 1004 and was known as "Bengaval-uru," the "City of Guards," in Old Kannada. An article published in The Hindu states:
An inscription, dating back to 890 C.E., shows Bengaluru is over 1000 years old. But it stands neglected at the Parvathi Nageshwara Temple in Begur near the city… written in Hale Kannada (Old Kannada) of the ninth century C.E., the epigraph refers to a Bengaluru war in 890 in which Buttanachetty, a servant of Nagatta, died. Though this has been recorded by historian R. Narasimhachar in his Epigraphia of Carnatica (Vol. 10 supplementary) (1898), no efforts have been made to preserve it.
A popular anecdote (although one contradicted by historical evidence) recounts that the eleventh-century Hoysala king Veera Ballala II, while on a hunting expedition, lost his way in the forest. Tired and hungry, he came across a poor old woman who served him boiled beans. The grateful king named the place "benda kaal-ooru" (Kannada: ಬೆಂದಕಾಳೂರು) (literally, "town of boiled beans"), eventually colloquialized to "Bengalūru". Theories also abound that the name has a floral origin, derived from the tree Benga or "Ven-kai," also known as the Indian Kino Tree (Pterocarpus marsupium).
On December 11, 2005, the Government of Karnataka announced that it had accepted a proposal by Jnanpith Award winner U. R. Ananthamurthy to rename Bangalore to Bengaluru, its name in Kannada. On September 27, 2006, the Bangalore Mahanagara Palike (BMP) passed a resolution to implement the proposed name change, which the Government of Karnataka accepted, deciding to officially implement the name change from November 1, 2006. That process has been currently stalled due to delays in getting clearances from the Union Home Ministry.
After centuries of rule by the Western Gangas, the Cholas captured Bangalore in 1024, and the city passed on to the Chalukya-Cholas in 1070. In 1116 the Hoysala Empire overthrew the Cholas and extended its rule over Bangalore. Modern Bangalore was founded by Kempe Gowda I, a vassal of the Vijayanagara Empire, who built a mud fort and a Nandi Temple in the proximity of modern Bangalore in 1537. Kempe Gowda referred to the new town as his "gandu bhoomi" or "Land of Heroes".
Within the fort, the town was divided into smaller divisions called petes (IPA: [peɪteɪ]). The town had two main streets: Chickkapete Street, which ran east-west, and Doddapete Street, which ran north-south. Their intersection formed the Doddapete Square — the heart of Bangalore. Kempe Gowda's successor, Kempe Gowda II, built four famous towers that marked Bangalore's boundary. During the Vijayanagara rule, Bangalore went by the names "Devarāyanagara" and "Kalyānapura" ("Auspicious City").
After the fall of the Vijayanagara Empire, Bangalore's rule changed hands several times. In 1638, a large Bijapur army led by Ranadulla Khan and accompanied by Shahji Bhonsle defeated Kempe Gowda III, and Shahji received Bangalore as a jagir. In 1687, the Mughal general Kasim Khan defeated Ekoji, son of Shahji, and then sold Bangalore to Chikkadevaraja Wodeyar (1673–1704) of Mysore for 300,000 rupees. After the death of Krishnaraja Wodeyar II in 1759, Hyder Ali, Commander-in-Chief of the Mysore Army, proclaimed himself the de facto ruler of Mysore. The kingdom later passed to Hyder Ali's son Tippu Sultan, known as the Tiger of Mysore. Bangalore was eventually incorporated into the British East Indian Empire after Tippu Sultan died in defeat in the Fourth Anglo-Mysore War (1799). The British returned administrative control of the Bangalore "pete" to the Maharaja of Mysore, choosing only to retain the Cantonment under their jurisdiction. The 'Residency' of Mysore State, first established at Mysore in 1799, later shifted to Bangalore in 1804. Abolished in 1843 only to be revived in 1881 at Bangalore, the Residency closed down in 1947 with the departure of the British. The British found it easier to recruit employees in the Madras Presidency and relocate them to the cantonment area during this period. The Kingdom of Mysore relocated its capital from Mysore city to Bangalore in 1831. Two important developments during that period contributed to the rapid growth of the city: the introduction of telegraph connections and a rail connection to Madras in 1864.
In the nineteenth century, Bangalore essentially became a twin city, with the "pete," whose residents were predominantly Kannadigas, and the "cantonment" created by the British, whose residents were predominantly Tamils. A bubonic plague epidemic hit Bangalore in 1898, dramatically reducing its population. New extensions in Malleshwara and Basavanagudi developed to the north and south of the pete. The government laid telephone lines to help co-ordinate anti-plague operations, appointing a health officer to the city in 1898. In 1906, Bangalore became the first city in India to have electricity, powered by the hydroelectric plant situated in Shivanasamudra. Bangalore's reputation as the Garden City of India began in 1927 with the Silver Jubilee celebrations of the rule of Krishnaraja Wodeyar IV. Several projects, such as the construction of parks, public buildings and hospitals, were instituted to beautify the city. After Indian independence in August 1947, Bangalore remained in the new Mysore State, of which the Maharaja of Mysore served as the Rajapramukh. Public sector employment and education provided opportunities for Kannadigas from the rest of the state to migrate to the city. Bangalore experienced rapid growth in the decades 1941–51 and 1971–81, witnessing the arrival of many immigrants from northern Karnataka. By 1961, Bangalore had become the sixth largest city in India, with a population of 1,207,000. In the decades that followed, Bangalore's manufacturing base continued to expand with the establishment of private companies such as Motor Industries Company (MICO; a subsidiary of Robert Bosch GmbH), which set up its manufacturing plant in the city. Bangalore experienced a boom in its real estate market in the 1980s and 1990s, spurred by capital investors from other parts of the country who converted Bangalore's large plots and colonial bungalows into multi-storied apartments. In 1985, Texas Instruments became the first multinational to set up base in Bangalore. Other information technology companies followed suit, and by the end of the twentieth century Bangalore had firmly established itself as the Silicon Valley of India.
Bangalore lies in the southeast of the South Indian state of Karnataka, in the heart of the Mysore Plateau (a region of the larger Precambrian Deccan Plateau) at an average elevation of 920 m (3,018 feet). The city covers an area of 741 km² (286 mi²). The majority of the city of Bangalore lies in the Bangalore Urban district of Karnataka, and the surrounding rural areas form a part of the Bangalore Rural district. The Government of Karnataka has carved out the new district of Ramanagaram from the old Bangalore Rural district.
Bangalore has a flat topography except for a central ridge running NNE-SSW. The highest point, Doddabettahalli, sits at 962 m (3,156 ft) on that ridge. No major rivers run through the city, though the Arkavathi and South Pennar cross paths at the Nandi Hills, 60 km (37 mi.) to the north. The river Vrishabhavathi, a minor tributary of the Arkavathi, arises within the city at Basavanagudi and flows through the city. The rivers Arkavathi and Vrishabhavathi together carry much of Bangalore's sewage. A sewerage system, constructed in 1922, covers 215 km² (133 mi²) of the city and connects with five sewage treatment centers located on the periphery of Bangalore.
In the sixteenth century, Kempe Gowda I constructed many lakes to meet the town's water requirements. The Kempambudhi Kere, since overrun by modern development, was prominent among those lakes. In the earlier half of the twentieth century, Sir Mirza Ismail (Diwan of Mysore, 1926–41 C.E.) commissioned the Nandi Hills waterworks to provide a water supply to the city. Currently, the river Kaveri provides around 80 percent of the total water supply to the city, with the remaining 20 percent obtained from the Thippagondanahalli and Hesaraghatta reservoirs of the river Arkavathy. Bangalore receives 800 million liters (211 million US gallons) of water a day, more than any other Indian city. Even with that abundance of water, Bangalore sometimes faces shortages, especially during the summer season in years of low rainfall. A random sampling study of the Air Quality Index (AQI) of 20 stations within the city indicated scores that ranged from 76 to 314, suggesting heavy to severe air pollution around areas of traffic concentration.
Bangalore has a handful of freshwater lakes and water tanks, the largest being Madivala tank, Hebbal lake, Ulsoor lake and Sankey Tank. Groundwater occurs in silty to sandy layers of the alluvial sediments. The Peninsular Gneissic Complex (PGC) is the most dominant rock unit in the area and includes granites, gneisses and migmatites, while the soils of Bangalore consist of red laterite and red, fine loamy to clayey soils. Large deciduous canopy trees and a minority of coconut trees make up most of the city's vegetation. Though Bangalore has been classified as part of seismic zone II (a stable zone), it has experienced quakes of magnitude as high as 4.5.
Due to its high elevation, Bangalore usually enjoys a salubrious climate throughout the year, although unexpected heat waves catch residents by surprise during the summer. A common refrain among Bangaloreans is that summer has gotten progressively hotter over the years. That could be due to the loss of green cover in the city, increased urbanization and the resulting urban heat island effect, as well as possibly climate change. January, the coolest month, has an average low temperature of 15.1 °C, and the hottest month, April, has an average high temperature of 33.6 °C. Winter temperatures rarely drop below 12 °C (54 °F), and summer temperatures seldom exceed 36–37 °C (100 °F). Bangalore receives rainfall from both the northeast and the southwest monsoons; September, October and August are the wettest months, in that order. Fairly frequent thunderstorms, which occasionally cause power outages and local flooding, moderate the summer heat. The heaviest rainfall recorded in a 24-hour period is 180 mm (7 in), recorded on October 1, 1997.
|Bangalore City officials|
|Administrator||S. Dilip Rau|
|Municipal Commissioner||Dr. S. Subramanya|
|Police Commissioner||N. Achuta Rao|
The Bruhat Bengaluru Mahanagara Palike (BBMP, Greater Bangalore Municipal Corporation) directs the civic administration of the city. Greater Bangalore was formed in 2007 by merging the 100 wards of the erstwhile Bangalore Mahanagara Palike with seven neighboring City Municipal Councils (CMC), one Town Municipal Council and 110 villages around Bangalore.
A city council composed of elected representatives called "corporators," one from each of the wards (localities) of the city, runs the Bruhat Bengaluru Mahanagara Palike. Council members are chosen in popular elections held once every five years. A mayor and commissioner of the council are also elected through a quota system, with terms reserved for a Scheduled Castes and Tribes candidate or an Other Backward Class female candidate. Members contesting elections to the council usually represent one or more of the state's political parties. Elections to the newly-created body have been placed on hold due to delays in the delimitation of wards and the finalization of voter lists; 150 wards, up from the 100 wards of the old Bangalore Mahanagara Palike, will participate.
Bangalore's rapid growth has created traffic congestion and infrastructural obsolescence problems that the Bangalore Mahanagara Palike has found challenging to address. A 2003 Battelle Environmental Evaluation System (BEES) evaluation of Bangalore's physical, biological and socioeconomic parameters indicated that Bangalore's water quality and terrestrial and aquatic ecosystems were close to ideal, while the city's socioeconomic parameters (traffic, quality of life) scored poorly. The BMP has been criticized by the Karnataka High Court, citizens and corporations for failing to effectively address the crumbling road and traffic infrastructure of Bangalore. The unplanned nature of growth in the city resulted in massive traffic gridlocks that the municipality attempted to ease by constructing a flyover system and by imposing one-way traffic systems.
Some of the flyovers and one-ways mitigated the traffic situation moderately but proved unable to adequately address the disproportionate growth of city traffic. In 2005 both the Central Government and the State Government allocated considerable portions of their annual budgets to address Bangalore's infrastructure. The Bangalore Mahanagara Palike works with the Bangalore Development Authority (BDA) and the Bangalore Agenda Task Force (BATF) to design and implement civic projects. Bangalore generates about 3,000 tons of solid waste per day, with about 1,139 tons collected and sent to composting units such as the Karnataka Composting Development Corporation. The municipality dumps the remaining collected solid waste in open spaces or on roadsides outside the city.
A Police Commissioner, an officer with the Indian Police Service (IPS), heads the Bangalore City Police (BCP). The BCP has six geographic zones, including the Traffic Police, the City Armed Reserve, the Central Crime Branch and the City Crime Record Bureau and runs 86 police stations, including two all-women police stations. As capital of the state of Karnataka, Bangalore houses important state government facilities such as the Karnataka High Court, the Vidhana Soudha (the home of the Karnataka state legislature) and Raj Bhavan (the residence of the Governor of Karnataka). Bangalore contributes two members to India's lower house of parliament, the Lok Sabha, and 24 members to the Karnataka State Assembly. In 2007, the Delimitation Commission of India reorganized the constituencies based on the 2001 census, and thus the number of Assembly and Parliamentary constituencies in Bangalore has been increased to 28 and three respectively. Those changes will take effect from the next elections. The Karnataka Power Transmission Corporation Limited (KPTCL) regulates electricity in Bangalore. Like many cities in India, Bangalore experiences scheduled power cuts, especially over the summer, to allow electricity providers to meet the consumption demands of households as well as corporations.
Bangalore's Rs. 260,260 crore (USD 60.5 billion) economy (2002–03 Net District Income) makes it a major economic center in India. Indeed, Bangalore ranks as India's fourth-largest and fastest-growing market. Bangalore's per capita income of Rs. 49,000 (US$ 1,160) is the highest for any Indian city. The city stands as the third-largest hub for high net worth individuals (HNWI / HNIs), after Mumbai and Delhi. Over 10,000 individual dollar millionaires and around 60,000 super-rich people who have an investable surplus of Rs. 4.5 crore and Rs. 50 lakh respectively live in Bangalore. As of 2001, Bangalore's share of Rs. 1660 crore (US$ 3.7 billion) in Foreign Direct Investment ranked the third highest for an Indian city. In the 1940s, industrial visionaries such as Sir Mirza Ismail and Sir Mokshagundam Visvesvaraya played an important role in the development of Bangalore's strong manufacturing and industrial base. Bangalore serves as headquarters to several public manufacturing heavy industries such as Hindustan Aeronautics Limited (HAL), National Aerospace Laboratories (NAL), Bharat Heavy Electricals Limited (BHEL), Bharat Electronics Limited, Bharat Earth Movers Limited (BEML) and Hindustan Machine Tools (HMT). In June 1972 the Indian government established the Indian Space Research Organisation (ISRO) under the Department of Space, headquartered in the city. Bangalore has earned the title "Silicon Valley of India" because of the large number of Information Technology companies located in the city, which contributed 33 percent of India's Rs. 144,214 crore (US$ 32 billion) IT exports in 2006-07.
Bangalore's IT industry divides into three main "clusters" — Software Technology Parks of India, Bangalore (STPI); International Technology Park Bangalore (ITPB), formerly International Technology Park Ltd. (ITPL); and Electronics City. Infosys and Wipro, India's second and third largest software companies, have their largest campuses in Electronics City. As headquarters to many of the global SEI-CMM Level 5 companies, Bangalore holds a prominent place on the global IT map. The growth of Information Technology has presented the city with unique challenges. Ideological clashes sometimes occur between the city's IT moguls, who demand an improvement in the city's infrastructure, and the state government, whose electoral base rests primarily on the people of rural Karnataka. Bangalore serves as a hub for biotechnology-related industry in India; in 2005, around 47 percent of the 265 biotechnology companies in India were headquartered there, including Biocon, India's largest biotechnology company.
Bangalore's HAL Airport (IATA code: BLR) ranks as India's fourth busiest and functions as both a domestic and an international airport, connecting the city to several destinations around the world. Unlike most airports in the country, which are controlled by the Airports Authority of India, this airport is owned and operated by Hindustan Aeronautics Limited, which also uses it to test and develop fighter aircraft for the Indian Air Force. With the liberalization of India's economic policies, many domestic carriers such as SpiceJet, Kingfisher Airlines, Jet Airways and Go Air have started servicing the city, leading to congestion problems at this airport. Aviation experts expect the situation to ease when the new Bangalore International Airport, presently under construction in Devanahalli on the outskirts of Bangalore, becomes operational. Currently targeted for inauguration in April 2008, this airport will have two runways with a capacity to handle 11 million passengers per year. Air Deccan and Kingfisher Airlines have their headquarters in Bangalore. The Indian Railways network connects Bangalore well to the rest of the country; the Rajdhani Express connects Bangalore to New Delhi, the capital of India, and other trains link it to Mumbai, Chennai, Kolkata, and Hyderabad, as well as other major cities in Karnataka. An intra-city rapid rail transport system called the Namma Metro is under development and is expected to be operational in 2011. Once completed, it will encompass a 33 km (20.5 mi) elevated and underground rail network, with 32 stations in Phase I and more being added in Phase II. Three-wheeled, black and yellow auto-rickshaws, referred to as autos, are a popular form of transport. They are metered and accommodate up to three passengers. Several operators, commonly referred to as Citi taxis, provide taxi service within Bangalore, taking up to four passengers. Usually metered, Citi taxis charge higher fares than auto-rickshaws.
Buses operated by Bangalore Metropolitan Transport Corporation (BMTC) represent the only means of public transport available in the city. While commuters can buy tickets on boarding those buses, BMTC also provides an option of a bus pass to frequent users. BMTC runs air-conditioned red-colored Volvo buses on major routes.
With an estimated population of 5,281,927 in the year 2007, Bangalore ranks as the third most populous city in India and the 27th most populous city in the world. With a decadal growth rate of 38 percent, Bangalore was the fastest-growing Indian metropolis after New Delhi for the decade 1991–2001. Residents of Bangalore refer to themselves as Bangaloreans in English or Bengaloorinavaru in Kannada. While Kannadigas make up the majority of the population, the cosmopolitan nature of the city has caused people from other states of India to migrate to Bangalore and settle there. Scheduled Castes and Tribes account for 14.3 percent of the city's population. Kannada, the official language of the state of Karnataka, is widely spoken in Bangalore.
According to the 2001 census of India, 79.37 percent of Bangalore's population professes Hinduism, roughly the same as the national average. Muslims comprise 13.37 percent of the population, again roughly the same as the national average, while Christians and Jains account for 5.79 percent and 1.05 percent of the population, respectively, double that of their national averages. Women make up 47.5 percent of Bangalore's population. Bangalore has the second highest literacy rate (83 percent) for an Indian metropolis, after Mumbai. Roughly 10 percent of Bangalore's population lives in slums — a relatively low proportion when compared to other cities in the developing world such as Mumbai (42 percent) and Nairobi (60 percent). The 2004 National Crime Records Bureau statistics indicate that Bangalore accounts for 9.2 percent of the total crimes reported from 35 major cities in India. Delhi and Mumbai accounted for 15.7 percent and 9.5 percent respectively.
Bangalore has been nicknamed the "Garden City of India" because of its greenery and the presence of many public parks, including Lal Bagh and Cubbon Park. Dasara, a traditional celebratory hallmark of the old Kingdom of Mysore, is a state festival celebrated with great vigor. Deepavali, the "Festival of Lights," transcends demographic and religious lines and is another important festival. Other traditional Indian festivals such as Ganesh Chaturthi, Ugadi, Sankranthi, Eid ul-Fitr, and Christmas also enjoy wide participation. The Kannada film industry has its main studios in Bangalore and produces many Kannada movies each year.
The diversity of cuisine available reflects the social and economic diversity of Bangalore. Roadside vendors, tea stalls, and South Indian, North Indian, Chinese and Western fast food all enjoy wide popularity in the city. Udupi restaurants are immensely popular and serve predominantly vegetarian, regional cuisine.
Bangalore has become a major center of Indian classical music and dance. Classical music and dance recitals enjoy heavy attendance throughout the year, particularly during the Ramanavami and Ganesha Chaturthi festivals. The Bengaluru Gayana Samaja has been at the forefront of promoting classical music and dance in the city. The city also has a vibrant Kannada theater scene with organizations like Ranga Shankara and Benaka leading the way. Some of India's top names in theater like the late B. V. Karanth, Girish Karnad and others have called the city home.
Bangalore has an active rock and Western music scene. Bands and artists such as Iron Maiden, Aerosmith, Scorpions, Roger Waters, Uriah Heep, Jethro Tull, Joe Satriani, INXS, No Doubt, Safri Duo, Black Eyed Peas, Deep Purple, Mark Knopfler, The Rolling Stones, and Bryan Adams, among others, have performed in the city. Bangalore has also earned the title "Pub Capital of India".
Cricket represents one of the most popular sports in Bangalore. A significant number of national cricketers have come from Bangalore, including former Indian cricket team captain Rahul Dravid. Other cricketing greats from Bangalore include Gundappa Vishwanath, Anil Kumble, E.A.S. Prasanna, Venkatesh Prasad, Bhagwat Chandrasekhar, Syed Kirmani and Roger Binny. Many children play gully cricket on the roads and in the city's many public fields. Bangalore's main international cricket stadium, M. Chinnaswamy Stadium, hosted its first match in 1974. Bangalore has a number of elite clubs, like the Bangalore Golf Club, the Bowring Institute and the exclusive Bangalore Club, which counts among its previous members Winston Churchill and the Maharaja of Mysore.
Until the early nineteenth century, most schools in Bangalore had been founded by religious leaders for pupils of their own religions. The western system of education came into vogue during the rule of Mummadi Krishnaraja Wodeyar, when two schools were established in Bangalore. The Wesleyan Mission followed in 1851, and the Bangalore High School, started by the Government, began in 1858.
In the present day, schooling for young children in Bangalore begins with kindergarten education. Schools affiliated with boards of education like the Karnataka state board, ICSE, CBSE, National Open School (NOS), IGCSE and IB offer primary and secondary education in Bangalore. Three kinds of schools operate in Bangalore, viz. government (run by the government), aided (the government provides financial aid) and un-aided private (without financial aid). After completing their secondary education, students typically enroll in Junior College (also known as Pre-University) in one of three streams — Arts, Commerce or Science. Upon completing the required coursework, students enroll in general or professional degree programs.
Bangalore University, established in 1964, has its campus in Bangalore. Around 500 colleges, with a total student enrollment of 300,000, are affiliated with the university. The university has two campuses within Bangalore: Jnanabharathi and Central College. The Indian Institute of Science, Bangalore, established in 1909, stands as the premier institute for scientific research and study in India. The National Law School of India University (NLSIU), one of the most sought-after law colleges in India, and the Indian Institute of Management, Bangalore, one of the premier management schools in India, also have campuses in Bangalore.
The first printing press was set up in Bangalore in 1840. In 1859, Bangalore Herald became the first English bi-weekly newspaper published in Bangalore, and in 1860, Mysore Vrittanta Bodhini became the first Kannada newspaper circulated in Bangalore. Currently, Vijaya Karnataka and The Times of India are the most widely circulated Kannada and English newspapers in Bangalore respectively.
Bangalore got its first radio station when All India Radio, the official broadcaster for the Indian Government, started broadcasting from its Bangalore station on November 2, 1955. The station transmitted in AM until 2001, when Radio City became the first private channel in India to transmit FM radio from Bangalore. In recent years, a number of FM channels have begun broadcasting from Bangalore. The city also has various clubs for ham radio enthusiasts.
Bangalore received its first television transmission on November 1, 1981, when Doordarshan established a relay center. Doordarshan established a production center in its Bangalore office in 1983, introducing a news program in Kannada on November 19, 1983. Doordarshan also launched a Kannada satellite channel on August 15, 1991, now christened DD Chandana. The advent of private satellite channels in Bangalore started in September 1991, when Star TV inaugurated its broadcast. Though the number of satellite TV channels available for viewing in Bangalore has grown over the years, cable operators play a major role in the availability of those channels, leading to occasional conflicts. Direct-to-home services are also now available in Bangalore. Internet services were inaugurated in Bangalore in the early 1990s, with STPI, the first internet service provider, offering access only to corporates. VSNL offered dial-up internet services to the general public at the end of 1995. Currently, Bangalore has the largest number of broadband internet connections in India.
All links retrieved May 11, 2016.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats.The history of earlier contributions by wikipedians is accessible to researchers here:
The history of this article since it was imported to New World Encyclopedia: |
|Titanium machine screws.|
Bored silly with describing titanium samples, I used to have a diversion here describing the difference between a bolt and a machine screw. Well, it turns out my description, based on the type of head and the portion of the shaft that is threaded, was completely and utterly wrong. Alert reader David Cook was kind enough to set me straight quite definitively:
No. Actually, the difference between a bolt and screw is based on its application, not appearance. A bolt is used with a nut to produce a clamping force to hold materials together; whereas a screw interlocks threads with the material itself (no nut on the end).
Screws usually have threads all the way up to provide the maximum possible thread-to-thread contact area with the material for maximum holding power. Bolts usually have only enough threads at the tip to attach a nut. But, some bolts also have threads all the way up so that one or more nuts can be installed anywhere.
A screw is perfectly adequate against smaller forces. Humans and machines can easily install screws (no fumbling with a nut). Without a nut, that's one less part to stock and pay for, and one less part to fall off or become lost. Given the large number used, the savings in cost and efficiency make screws an effective solution.
However, against greater forces, the threads in the material are likely to fail and the screw would rip out. Thicker and stronger material would be required to produce strong-enough threads in the material to resist these greater forces. But that would be heavier and more expensive. Instead, a thick, strong nut can be installed to rely on the nut's threads instead of the material's threads. With a nut and a bolt you can use relatively weaker and thinner (usually lighter and less expensive) material.
Your observation that a bolt usually has a hex head or other external-drive is simply because that type of head allows greater force to be applied, which is necessary to achieve the purpose of a bolt. On the other hand, the head of a screw (slotted, Phillips, internal drive) is designed for convenience of installing and removing, rather than great forces. In fact, most screw heads are purposely designed to be torque limiting (the tool slips out) to prevent over-tightening.
In summary, although you may be able to guess at the manufacturer's intended application based on the appearance of the fastener, a bolt or screw is not defined by its appearance, but in how it is used. You can use most screws as bolts, simply by adding a nut on the end, and most bolts as screws, simply by tapping (adding threads to) the material.
The nearest analogy I can think of is to select a knife and ask "Is it a kitchen utensil or a weapon?" The manufacturer probably had a particular purpose in mind, and there are certain visible features that would suggest a particular knife is better suited for one application over the other, but what you call it depends on how it is being used.
So, now you know, and now I know.
Source: eBay seller e3134
Contributor: Theodore Gray
Acquired: 30 November, 2003
Text Updated: 16 March, 2009 |
What is this platform? Why are co-ops important?
In September 2015, UN Member States adopted the 2030 Agenda for Sustainable Development which comprises seventeen Sustainable Development Goals (SDGs) that aim to take forward the work begun in 2000 by the Millennium Development Goals. This ambitious agenda sets a course to end poverty, protect the planet and ensure prosperity for all by 2030.
Co-ops for 2030 is a campaign for cooperatives to learn more about the SDGs, commit to pledges to contribute to achieving the SDGs (often through initiatives that are already in place) and report their progress.
The Agenda explicitly recognises co-operative enterprises as important players within the private sector to achieve the SDGs, creating an opportunity for co-operatives to position themselves as partners with global, national, regional and local institutions to achieve sustainable development.
The co-operative model of business is based on ethics, values and principles that put the needs and aspirations of their members above the simple goal of maximising profit. Through self-help and empowerment, reinvesting in their communities and concern for the well-being of people and the world in which we live, co-operatives nurture a long-term vision for sustainable economic growth, social development and environmental responsibility.
Co-operatives are currently in the second phase of implementing the Blueprint for a Co-operative Decade, a global strategy for the co-operative business model to become, by 2020, the acknowledged leader in economic, social and environmental sustainability, the model preferred by people, and the fastest-growing form of enterprise.
Given the synergies between the UN’s vision for a sustainable future and that of the co-operative movement, it is clear that co-operatives can contribute to the achievement of the SDGs. In order to do this best, co-operatives need to align their work with the SDGs and with the targets and indicators that will track achievement of the Goals leading up to 2030.
The International Co-operative Alliance, as the global voice of the movement, is committed to educating co-operatives about the SDGs, helping co-operative enterprises respond to the UN’s call to action and collecting information about co-operative contributions to the 2030 Agenda, in order to better position co-operatives as partners throughout the implementation process. All of these activities will in turn further the aims of the Blueprint strategy. |
Explore foam’s economic impact.
The foam industry provides thousands of jobs throughout the country and saves consumers millions of dollars every year.
Businesses and government facilities across the state rely on foam. Alternative products don’t insulate nearly as well, they cost more, and in many cases, they generate more waste, while increasing air and water emissions over their life cycles.
Foam Across Industries
A typical foam tray costs significantly less than a compostable tray – saving some school districts over a million dollars per year in material cost.
Most hospitals use polystyrene foam products to minimize exposure to bacteria and other foodborne pathogens.
The PS foam manufacturing industry is dedicated to keeping costs low, increasing efficiency and excellence, and delivering solutions for economic growth.
Many restaurants and food trucks rely on the convenience, insulation properties, and high level of sanitation afforded by foam food packaging in providing healthy food to their customers.
Many restaurants that use foam are considered small businesses. From an economic perspective, foam is often the most cost effective choice for small business owners.
Because of its lightweight structure, polystyrene foam is a preferred protective packaging for use in shipping valuable items.
|
A dummy activity is a simulated activity of sorts, one that has zero duration and is created for the sole purpose of demonstrating a specific relationship and path of action on the arrow diagramming method. Dummy activities are a useful tool to implement when the specific logical relationship between two particular activities on the arrow diagramming method cannot be linked or conceptualized through simple use of arrows going from one activity to another. In such cases, the creation of a dummy activity, which serves essentially as a placeholder, can prove exceedingly valuable. Dummy activities should in no case be allocated any duration of time in the planning and/or scheduling of project activities and components. When they are illustrated in a graphical format, dummy activities should be represented by the use of a dashed line with an arrowhead on one end, and may in some cases be represented by a unique color.
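To make the idea concrete, here is a minimal sketch (not from the source; the activity names, node numbers and durations are invented) of an activity-on-arrow network in which a zero-duration dummy arc expresses a dependency that ordinary arrows cannot:

```python
# Activity-on-arrow network: each activity is an arc from one node to another.
# "dummy" is a zero-duration placeholder arc: it lets D depend on A as well
# as B without implying that any real work occurs between nodes 2 and 3.

activities = {
    # name: (tail_node, head_node, duration)
    "A": (1, 2, 3),
    "B": (1, 3, 2),
    "C": (2, 4, 4),      # follows A
    "D": (3, 4, 5),      # follows B, and A (via the dummy)
    "dummy": (2, 3, 0),  # zero duration, drawn as a dashed arrow on paper
}

def predecessors(name):
    """Return the activities whose head node is this activity's tail node."""
    tail, _, _ = activities[name]
    return [n for n, (_, head, _) in activities.items() if head == tail]

for name in activities:
    print(f"{name}: follows {predecessors(name) or 'nothing'}")
```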
This term is defined in the 3rd edition of the PMBOK but not in the 4th. |
“Communities in Appalachia and Wales can play a really important part in the movement for rethinking what we mean by economic development.” — John Gaventa
John Gaventa started a video exchange between coal miners in Appalachia and Wales in 1974. The After Coal project owes a huge debt to his groundbreaking work. Gaventa is currently director of the Coady International Institute at St. Francis Xavier University in Antigonish, Nova Scotia.
We were lucky to catch him in Virginia last week and record an interview for After Coal. Here is a glimpse at his thoughts:
“Change is an inevitable part of mining communities. This doesn’t mean that we give up on these communities. It means we need to think about using local strengths to create a different kind of sustainable economy. The question is not: Coal versus no coal. The question is: How do we use community assets such as leadership, experience, resilience, and skills to create something different.” |
CIM - Flexible Manufacturing System
A Flexible Manufacturing System (FMS) is a configuration of computer-controlled, semi-independent workstations where materials are automatically handled and machine-loaded. An FMS is a type of flexible automation system that builds on the programmable automation of NC and CNC machines. Programs and tooling setups can be changed with almost no loss of production time when moving from production of one product to the next. Such systems require a large initial investment but little direct labor to operate.
- Computer Integrated Manufacturing - CIM
Computer-integrated manufacturing (CIM) is an umbrella term for the total integration of product design and engineering, process planning, and manufacturing by means of complex computer systems.
Flexible Manufacturing System (FMS) is one of the tools used in Computer-Integrated Manufacturing or CIM.
An FMS system has three key components:
1. several computer-controlled workstations, such as CNC machines or robots, that perform a series of operations
2. a computer-controlled transport system for moving materials and parts from one machine to another and in and out of the system
3. loading and unloading stations
Workers bring raw materials for a part family to the loading points, where the FMS takes over. Computer-controlled transporters deliver the materials to various workstations where they pass through a specific sequence of operations unique to each part. The route is determined by the central computer. The goal of using FMS systems is to synchronize activities and maximize the system’s utilization. Because automation makes it possible to switch tools quickly, setup times for machines are short. This flexibility often allows one machine to perform an operation when another is down for maintenance and avoids bottlenecks by routing parts to another machine when one is busy.
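As a rough sketch of that routing behaviour (not from the source; the machine names, states and routings are invented), the central computer's bottleneck-avoidance logic might be approximated like this:

```python
# A central controller routes each part to the first available machine that
# can perform its next operation, falling back to alternatives when a
# machine is busy or down -- the flexibility described above.

machines = {"CNC-1": "idle", "CNC-2": "busy", "CNC-3": "down"}

# For each part, the machines capable of its next operation, in order of preference.
capable = {"part-A": ["CNC-2", "CNC-1"], "part-B": ["CNC-3", "CNC-1"]}

def route(part):
    """Return an idle machine for the part's next operation, or None to wait."""
    for station in capable[part]:
        if machines[station] == "idle":
            machines[station] = "busy"
            return station
    return None  # part circulates on the transport loop until a machine frees up

for part in ["part-A", "part-B"]:
    print(part, "->", route(part) or "waiting")
```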
Figure K.2 shows the layout of a typical FMS, which produces turning and machining centers. Specific characteristics of this FMS include the following:
❐ The computer control room (right) houses the main computer, which controls the transporter and sequence of operations.
❐ Three CNC machines, each with its own microprocessor, control the details of the machining process.
❐ Two AGVs, which travel around a 200-foot-long oval track, move materials on pallets to and from the CNCs. When the AGVs’ batteries run low, the central computer directs them to certain spots on the track for recharging.
❐ Indexing tables lie between each CNC and the track. Inbound pallets from an AGV are automatically transferred to the right side of the table, and out-bound pallets holding finished parts are transferred to the left side for pickup.
❐ A tool changer located behind each CNC loads and unloads tool magazines. Each magazine holds an assortment of tools. A machine automatically selects tools for the next specific operation. Changing from one tool to another takes only 2 minutes.
❐ Two load and unload stations are manually loaded by workers; loading takes 10 to 20 minutes.
❐ An automatic AS/RS (upper right) stores finished parts. The AGV transfers parts on its pallet to an indexing table, which then transfers them to the AS/RS. The process is reversed when parts are needed for assembly into finished products elsewhere in the plant.
This particular system fits processes involving medium-level variety (5 to 100 parts) and volume (annual production rates of 40 to 2,000 units per part). The system can simultaneously handle small batches of many products. In addition, an FMS can be used a second way: At any given time, an FMS can produce low-variety, high-volume products in much the same way that fixed manufacturing systems do. However, when these products reach the end of their life cycles, the FMS can be reprogrammed to accommodate a different product. This flexibility makes FMS very appealing, especially to operations where life cycles are short.
Since the first FMS was introduced in the mid-1960s, the number installed worldwide has grown to almost 500, with about half of them in Japan or the United States and the other half in Europe. A much more popular version of flexible automation is the flexible manufacturing cell (FMC), a scaled-down version of FMS that consists of one or a very small group of NC machines that may or may not be linked to a materials handling mechanism. Unlike the more sophisticated FMS, an FMC does not have a computer-controlled materials handling system that moves parts to the appropriate machines.
- CIM - Computer-Aided Design and Manufacturing
Computer-aided design (CAD) is an electronic system for designing new parts or products or altering existing ones, replacing drafting traditionally done by hand. The component of CIM that deals directly with manufacturing operations is called computer-aided manufacturing (CAM).
- CIM - Numerically Controlled Machines & Industrial Robots
Numerically controlled (NC) machines are large machine tools programmed to produce small- to medium-sized batches of intricate parts. Industrial robots are versatile, computer-controlled machines programmed to perform various tasks.
- CIM - Automated Materials Handling
Materials handling covers the processes of moving, packaging, and storing a product. Moving, handling, and storing materials cost time and money but add no value to the product. |
What Is the Difference Between Formal and Informal Working?
There’s a general understanding that the word “work” refers to any job or task that people perform in exchange for money or for some other type of benefit. However, economists have identified two common types of work: formal and informal. Although both types of work involve jobs and tasks in exchange for money or benefits, there are some differences between the two involving things such as contracts, compensation, and job security.
The Elements of Formal Work
Formal work refers to work in which a company hires an employee under an established working agreement that includes salary or wages, health benefits, and defined work hours and workdays. In most instances, employees don’t work under a signed contract, but rather under the agreement reached when the employer offered the job to the employee. This agreement remains in force until the employer makes a change and informs the employee about those changes. Employees in a formal work agreement are often given an annual performance evaluation and are eligible for salary increases and promotions based on their performance.
The Elements of Informal Work
Informal work refers to work in which an employer hires an employee without an established working agreement. With informal work, employees don’t receive health benefits and are often hired temporarily. Their work hours are not guaranteed, which means that in one week they may work 30 hours, and the following week they may work only 10 hours. Informal workers are treated like contractors, and often bounce from one job to another. In most instances, informal workers are paid in cash, but if they are paid by check, no taxes are deducted from their salary.
Differences Between Formal and Informal Work
One primary difference between formal work and informal work is that formal work is far more stable than informal work. The reason for this is that companies invest time, training, and education in formal work employees, so that they can gain new skills that will benefit the business. Companies that provide informal work are seeking temporary employees to perform short-term tasks, typically seasonal work, which will end in a few weeks or months.
Another major difference is that formal work typically pays higher wages than informal work. The reason is that formal work tends to require a higher level of education or training than informal work. For example, a computer programmer is a type of formal work that requires a specific set of skills. In contrast, a person hired to haul old computers to a recycling dump is performing informal work that doesn’t require any type of specialized training. As a result, formal workers typically earn higher salaries and wages than informal workers.
Formal and informal work also differ when it comes to taxes. Formal workers are taxed under the existing tax guidelines and receive paychecks that reflect these taxes. Informal workers have no taxes withheld and are responsible for paying their own taxes. As a result, a country that relies mostly on informal work may not receive all the taxes owed under the law, since there may be millions of workers who choose not to report their incomes and pay taxes on that income.
- Federal Reserve Bank of St. Louis: What is the Informal Labor Market?
- MasterCard: In Emerging Economies, Is There a Role for the Informal Sector?
- Funds for NGOs: Specific Characteristics of the Formal Economy and Informal Economy
- The World Bank: Informality and Formality – Two Ends of the Employment Continuum |
Solar energy can be used in a variety of ways in order to benefit our homes. With the right choice of home solar energy system, you’ll be able to generate a source of clean and renewable electricity or a source of hot water for your home.
In this article we take a look at the different solar energy systems available for your home.
Solar Power Systems
Solar power systems for the home are becoming increasingly popular in many countries across the globe as homeowners look for ways to reduce their energy bills whilst being kind to the environment.
Although solar power systems are often costly to install, they have the potential not only to pay back their initial cost, but to save you significant amounts of money in future years. Although this all depends on a variety of factors, including location, many homeowners have saved substantially over the lifetime of their installations. As the efficiency of solar power technologies increases year on year, so does the possibility of making money from a home solar power installation.
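As a back-of-the-envelope illustration (all figures are assumed for the example, not taken from this article), the simple payback period is just the installed cost divided by the annual bill savings:

```latex
\text{payback} \;=\; \frac{\text{installed cost}}{\text{annual savings}}
\;=\; \frac{\$12{,}000}{\$1{,}200/\text{yr}} \;=\; 10\ \text{years}
```

Under those assumed numbers, a system with a 25-year service life would spend well over a decade generating pure savings.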
Some homeowners have the ability to sell any excess electricity they produce back to the grid. This can help smaller households with a high-capacity solar power system sell on large quantities of electricity that would otherwise go unused.
Solar Hot Water Systems
In more recent years, some homeowners have been opting to install solar hot water systems to their homes. This can save a household significant amounts of money through reduced energy bills associated with the heating of their hot water.
A solar hot water system has a similar appearance to a solar power system; however, it often requires additional plumbing work and the installation of a storage tank inside the home. For this reason, most homeowners opt for a solar power system instead.
Unlike a solar power system, a solar hot water system cannot sell its output back to the grid. Although this may be seen as a disadvantage, hot water generated by solar energy can be stored in tanks for use at a later time, thus helping to reduce wastage.
Some homeowners have seen positive results from installing both a solar power and a solar hot water system in their home. This provides the best of both worlds, helping to almost eliminate the need for external energy sources.
Should you wish to look into the options available to you and your home, you should consult a reputable renewable energy systems installer in your area for advice. |
Underground coal gasification
- Industrial sector(s): oil and gas industry
- Main facilities: Angren Power Station (Uzbekistan); Majuba Power Station (South Africa); Chinchilla Demonstration Facility (Australia)
- Inventor: Carl Wilhelm Siemens
- Year of invention: 1868
- Developer(s): African Carbon Energy; Ergo Exergy Technologies; Skochinsky Institute of Mining
Underground coal gasification (UCG) is an industrial process which converts coal into product gas. UCG is an in-situ gasification process carried out in non-mined coal seams using injection of oxidants, and bringing the product gas to surface through production wells drilled from the surface.
The predominant product gases are methane, hydrogen, carbon monoxide and carbon dioxide. Ratios vary depending upon formation pressure, depth of coal and oxidant balance. Gas output may be combusted for electricity production. Alternatively, the gas can be used to produce synthetic natural gas, or the hydrogen and carbon monoxide can be used as a chemical feedstock for the production of fuels (e.g. diesel), fertilizer, explosives and other products. The technique can be applied to coal resources that are otherwise unprofitable or technically complicated to extract by traditional mining methods. UCG offers an alternative to conventional coal mining methods for some resources. It has been linked to a number of concerns from environmental campaigners.
The earliest recorded mention of the idea of underground coal gasification was in 1868, when Sir William Siemens in his address to the Chemical Society of London suggested the underground gasification of waste and slack coal in the mine. Russian chemist Dmitri Mendeleyev further developed Siemens' idea over the next couple of decades.
In 1909–1910, American, Canadian, and British patents were granted to American engineer Anson G. Betts for "a method of using unmined coal". The first experimental work on UCG was planned to start in 1912 in Durham, in the United Kingdom, under the leadership of Nobel Prize winner Sir William Ramsay. However, he was unable to commence the UCG field work before the outbreak of World War I, and the project was abandoned.
In 1913, Ramsay's work was noticed by the Russian exile Vladimir Lenin, who wrote an article in the newspaper Pravda, "Great Victory of Technology", promising to liberate workers from hazardous work in the mines through underground coal gasification. Between 1928 and 1939, underground tests were conducted in the Soviet Union by the state-owned organization Podzemgaz. The first test using the chamber method started on 3 March 1933 in the Moscow coal basin at the Krutova mine. This test and several subsequent tests failed. The first successful test was conducted on 24 April 1934 in Lysychansk, Donetsk Basin, by the Donetsk Institute of Coal Chemistry.
The first pilot-scale process started on 8 February 1935 in Horlivka, Donetsk Basin. Production gradually increased and, in 1937–1938, the local chemical plant began using the produced gas. In 1940, experimental plants were built in Lysychansk and Tula. After World War II, the Soviet activities culminated in the operation of five industrial-scale UCG plants in the early 1960s. However, Soviet activities subsequently declined due to the discovery of extensive natural gas resources. In 1964, the Soviet program was downgraded. As of 2004, only the Angren site in Uzbekistan and the Yuzhno-Abinsk site in Russia continued operations.
After World War II, the shortage of energy and the diffusion of the Soviet results provoked new interest in Western Europe and the United States. In the United States, tests were conducted in 1947–1960 in Gorgas, Alabama. From 1973 to 1989, extensive testing was carried out; the United States Department of Energy and several large oil and gas companies conducted numerous tests. Lawrence Livermore National Laboratory conducted three tests in 1976–1979 at the Hoe Creek test site in Campbell County, Wyoming.
In cooperation with Sandia National Laboratories and Radian Corporation, Livermore conducted experiments in 1981–1982 at the WIDCO Mine near Centralia, Washington. In 1979–1981, an underground gasification of steeply dipping seams was demonstrated near Rawlins, Wyoming. The program culminated in the Rocky Mountain trial in 1986–1988 near Hanna, Wyoming.
In Europe, the stream method was tested at Bois-la-Dame, Belgium, in 1948 and in Jerada, Morocco, in 1949. The borehole method was tested at Newman Spinney and Bayton, United Kingdom, in 1949–1950. A few years later, a first attempt was made to develop a commercial pilot plan, the P5 Trial, at Newman Spinney in 1958–1959. During the 1960s, European work stopped, due to an abundance of energy and low oil prices, but recommenced in the 1980s. Field tests were conducted in 1981 at Bruay-en-Artois and in 1983–1984 at La Haute Deule, France, in 1982–1985 at Thulin, Belgium, and in 1992–1999 the El Tremedal site, Province of Teruel, Spain. In 1988, the Commission of the European Communities and six European countries formed a European Working Group.
In New Zealand, a small scale trial was operated in 1994 in the Huntly Coal Basin. In Australia, tests were conducted starting in 1999. China has operated the largest program since the late 1980s, including 16 trials.
Underground coal gasification converts coal to gas while still in the coal seam (in-situ). Gas is produced and extracted through wells drilled into the unmined coal seam. Injection wells are used to supply the oxidants (air, oxygen) and steam to ignite and fuel the underground combustion process. Separate production wells are used to bring the product gas to the surface. The high-pressure combustion is conducted at temperatures of 700–900 °C (1,290–1,650 °F), but it may reach up to 1,500 °C (2,730 °F).
The process decomposes coal and generates carbon dioxide (CO2), hydrogen (H2), carbon monoxide (CO) and methane (CH4). In addition, there are small quantities of various contaminants, including sulfur oxides (SOx), mono-nitrogen oxides (NOx) and hydrogen sulfide (H2S). As the coal face burns and the immediate area is depleted, the oxidants injected are controlled by the operator.
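The product mix above follows standard gasification chemistry; as an illustration (textbook reactions, not reproduced from this article), the main underground reactions are:

```latex
\begin{aligned}
\mathrm{C} + \mathrm{O_2} &\rightarrow \mathrm{CO_2} && \text{(combustion, supplies heat)}\\
\mathrm{C} + \mathrm{H_2O} &\rightarrow \mathrm{CO} + \mathrm{H_2} && \text{(steam gasification)}\\
\mathrm{C} + \mathrm{CO_2} &\rightarrow 2\,\mathrm{CO} && \text{(Boudouard reaction)}\\
\mathrm{CO} + 3\,\mathrm{H_2} &\rightarrow \mathrm{CH_4} + \mathrm{H_2O} && \text{(methanation)}
\end{aligned}
```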
There are a variety of designs for underground coal gasification, all of which are designed to provide a means of injecting oxidant and possibly steam into the reaction zone, and also to provide a path for production gases to flow in a controlled manner to the surface. As coal varies considerably in its resistance to flow, depending on its age, composition and geological history, the natural permeability of the coal to transport the gas is generally not adequate. For high pressure break-up of the coal, hydro-fracturing, electric-linkage, and reverse combustion may be used in varying degrees.
The simplest design uses two vertical wells: one injection and one production. Sometimes it is necessary to establish communication between the two wells, and a common method is to use reverse combustion to open internal pathways in the coal. Another alternative is to drill a lateral well connecting the two vertical wells. UCG with simple vertical wells, inclined wells, and long deflected wells was used in the Soviet Union. The Soviet UCG technology was further developed by Ergo Exergy and tested at Linc's Chinchilla site in 1999–2003, at the Majuba UCG plant (2007), and in Cougar Energy's failed UCG pilot in Australia (2010).
In the 1980s and 1990s, a method known as CRIP (controlled retraction and injection point) was developed (but not patented) by the Lawrence Livermore National Laboratory and demonstrated in the United States and Spain. This method uses a vertical production well and an extended lateral well drilled directionally in the coal. The lateral well is used for injection of oxidant and steam, and the injection point can be changed by retracting the injector.
Carbon Energy was the first to adopt a system which uses a pair of lateral wells in parallel. This system allows a consistent separation distance between the injection and production wells while progressively mining the coal between the two wells. This approach is intended to provide access to the greatest quantity of coal per well set and also allows greater consistency in production gas quality.
A new technology was announced in May 2012 by developer Portman Energy, wherein a method called SWIFT (Single Well Integrated Flow Tubing) uses a single vertical well for both syngas recovery and oxidant delivery. The design has a single casing of tubing strings, enclosed and filled with an inert gas to allow for leak monitoring, corrosion prevention and heat transfer. A series of horizontally drilled lateral oxidant delivery lines into the coal and a single or multiple syngas recovery pipeline(s) allow for a larger area of coal to be combusted at one time. The developers claim this method will increase syngas production by up to ten times over prior design approaches, and that the single-well design means development costs are significantly lower; facilities and wellheads are concentrated at a single point, reducing the footprint of surface access roads, pipelines and facilities. The UK patent office has advised that the full patent application GB2501074 by Portman Energy would be published on 16 October 2013.
A wide variety of coals are amenable to the UCG process. Coal grades from lignite through to bituminous may be successfully gasified. A great many factors are taken into account in selecting appropriate locations for UCG, including surface conditions, hydrogeology, lithology, coal quantity, and quality. According to Andrew Beath of CSIRO Exploration & Mining, other important criteria include:
- Depth of 100–600 metres (330–1,970 ft)
- Thickness more than 5 metres (16 ft)
- Ash content less than 60%
- Minimal discontinuities
- Isolation from valued aquifers.
According to Peter Sallans of Liberty Resources Limited, these criteria are (a screening sketch follows the list):
- Depth of 100–1,400 metres (330–4,590 ft)
- Thickness more than 3 metres (9.8 ft)
- Ash content less than 60%
- Minimal discontinuities
- Isolation from valued aquifers.
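A minimal sketch of how such criteria might be applied as a screen (the function, field names and example seam are invented; only the thresholds come from the two lists above):

```python
# Screen a candidate seam against published UCG site-selection criteria.
# Both sets share the ash, discontinuity and aquifer conditions and differ
# only in the acceptable depth range and minimum thickness.

def meets_criteria(seam, depth_range_m, min_thickness_m):
    return (depth_range_m[0] <= seam["depth_m"] <= depth_range_m[1]
            and seam["thickness_m"] > min_thickness_m
            and seam["ash_pct"] < 60
            and seam["minimal_discontinuities"]
            and seam["isolated_from_aquifers"])

seam = {"depth_m": 350, "thickness_m": 6.0, "ash_pct": 22,
        "minimal_discontinuities": True, "isolated_from_aquifers": True}

print("Beath (CSIRO):   ", meets_criteria(seam, (100, 600), 5))   # depth 100-600 m, >5 m thick
print("Sallans (Liberty):", meets_criteria(seam, (100, 1400), 3)) # depth 100-1400 m, >3 m thick
```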
Underground coal gasification allows access to coal resources that are not economically recoverable by other technologies, e.g., seams that are too deep, low grade, or that have a thin stratum profile. By some estimates, UCG will increase economically recoverable reserves by 600 billion tonnes. Lawrence Livermore National Laboratory estimates that UCG could increase recoverable coal reserves in the USA by 300%. Livermore and Linc Energy claim that UCG capital and operating costs are lower than in traditional mining.
UCG product gas is used to fire combined cycle gas turbine (CCGT) power plants, with some studies suggesting power island efficiencies of up to 55%, with a combined UCG/CCGT process efficiency of up to 43%. CCGT power plants using UCG product gas instead of natural gas can achieve higher outputs than pulverized-coal-fired power stations (and associated upstream processes), resulting in a large decrease in greenhouse gas (GHG) emissions.
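If the stage efficiencies simply multiply (an illustrative decomposition; the 78% gas-production figure is inferred from the numbers above rather than stated in the text), the combined efficiency works out as:

```latex
\eta_{\text{combined}} \;=\; \eta_{\text{UCG}} \times \eta_{\text{CCGT}}
\;\approx\; 0.78 \times 0.55 \;\approx\; 0.43
```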
UCG product gas can also be used for:
- Synthesis of liquid fuels;
- Manufacture of chemicals, such as ammonia and fertilizers;
- Production of synthetic natural gas;
- Production of hydrogen.
Underground product gas is an alternative to natural gas and potentially offers cost savings by eliminating mining, transport, and solid waste. The expected cost savings could increase given higher coal prices driven by emissions trading, taxes, and other emissions reduction policies, e.g. the Australian Government's proposed Carbon Pollution Reduction Scheme.
Cougar Energy and Linc Energy conducted pilot projects in Queensland, Australia based on UCG technology provided by Ergo Exergy until they were banned in 2016. Yerostigaz, a subsidiary of Linc Energy, produces about 1 million cubic metres (35 million cubic feet) of syngas per day in Angren, Uzbekistan. The produced syngas is used as fuel in the Angren Power Station.
In South Africa, Eskom (with Ergo Exergy as technology provider) is operating a demonstration plant in preparation for supplying commercial quantities of syngas for commercial production of electricity. African Carbon Energy has received environmental approval for a 50 MW power station near Theunissen in the Free State province and is bid-ready to participate in the DOE's Independent Power Producer (IPP) gas program where UCG has been earmarked as a domestic gas supply option.
ENN has also operated a successful pilot project in China.
In addition, there are companies developing projects in Australia, UK, Hungary, Pakistan, Poland, Bulgaria, Canada, US, Chile, China, Indonesia, India, South Africa, Botswana, and other countries. According to the Zeus Development Corporation, more than 60 projects are in development around the world.
Eliminating mining eliminates mine safety issues. Compared to traditional coal mining and processing, underground coal gasification eliminates surface damage and solid waste discharge, and reduces sulfur dioxide (SO2) and nitrogen oxide (NOx) emissions. For comparison, the ash content of UCG syngas is estimated to be approximately 10 mg/m3, compared to smoke from traditional coal burning, where ash content may be up to 70 mg/m3. However, UCG operations cannot be controlled as precisely as surface gasifiers. Variables include the rate of water influx, the distribution of reactants in the gasification zone, and the growth rate of the cavity. These can only be estimated from temperature measurements and from analyzing product gas quality and quantity.
Subsidence is a common issue with all forms of extractive industry. While UCG leaves the ash behind in the cavity, the depth of the void left after UCG is typically more than other methods of coal extraction.
Underground combustion produces NOx and SO2, but lowers overall emissions, including those responsible for acid rain.
Regarding emissions of atmospheric CO2: proponents of UCG have argued that the process has advantages for geologic carbon storage. Combining UCG with CCS (carbon capture and storage) technology allows some of the CO2 to be re-injected on-site into the highly permeable rock created during the burning process, i.e. where the coal used to be. Contaminants, such as ammonia and hydrogen sulfide, can be removed from product gas at a relatively low cost.
However, as of late 2013, CCS had never been successfully implemented on a commercial scale, as it was not within the scope of such projects, and some projects had also resulted in environmental concerns. In Australia in 2014, the government filed charges over alleged serious environmental harm stemming from Linc Energy's pilot underground coal gasification plant near Chinchilla, in Queensland's foodbowl of the Darling Downs. When UCG was banned in April 2016, the Queensland Mines Minister, Dr Anthony Lynham, stated: "The potential risks to Queensland's environment and our valuable agricultural industries far outweigh any potential economic benefits. UCG activity simply doesn't stack up for further use in Queensland."
Meanwhile, as an article in the Bulletin of the Atomic Scientists pointed out in March 2010, UCG could result in massive carbon emissions. "If an additional 4 trillion tonnes [of coal] were extracted without the use of carbon capture or other mitigation technologies, atmospheric carbon-dioxide levels could quadruple," the article says, "resulting in a global mean temperature increase of between 5 and 10 degrees Celsius."
Aquifer contamination is a potential environmental concern. Organic and often toxic materials (such as phenol) could remain in the underground chamber after gasification if the chamber is not decommissioned. Site decommissioning and rehabilitation are standard requirements in resources development approvals, whether for UCG, oil and gas, or mining, and decommissioning of UCG chambers is relatively straightforward. Phenol leachate is the most significant environmental hazard due to its high water solubility and high reactiveness to gasification. The US Department of Energy's Lawrence Livermore National Laboratory conducted an early UCG experiment at very shallow depth, and without hydrostatic pressure, at Hoe Creek, Wyoming. It did not decommission that site, and testing showed contaminants (including the carcinogen benzene) in the chamber. The chamber was later flushed and the site successfully rehabilitated. Some research has shown that the persistence of minor quantities of these contaminants in groundwater is short-lived and that groundwater recovers within two years. Even so, proper practice, supported by regulatory requirements, should be to flush and decommission each chamber and to rehabilitate UCG sites.
Newer UCG technologies and practices claim to address environmental concerns, such as issues related to groundwater contamination, by implementing the “Clean Cavern” concept. This is the process whereby the gasifier is self-cleaned via the steam produced during operation and also after decommissioning. Another important practice is maintaining the pressure of the underground gasifier below that of the surrounding groundwater. The pressure difference forces groundwater to flow continuously into the gasifier and no chemical from the gasifier can escape into the surrounding strata. The pressure is controlled by the operator using pressure valves at the surface.
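The pressure condition is ordinary hydrostatics; as a worked illustration (the depth and fluid properties are assumed, not from the source), the groundwater pressure at a seam depth h is roughly:

```latex
p_{\text{water}} \;\approx\; p_{\text{atm}} + \rho g h
\;=\; 0.1\,\mathrm{MPa} + (1000\,\mathrm{kg/m^3})(9.81\,\mathrm{m/s^2})(300\,\mathrm{m})
\;\approx\; 3.0\,\mathrm{MPa}
```

So for a seam at 300 m, the operator would hold the gasifier a little below about 3 MPa to keep the flow directed inward.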
- Coal Gas, www.clarke-energy.com, retrieved 12 December 2013
- BBC, "Coal gasification: The clean energy of the future?", retrieved 12 July 2014
- Siemens, C.W. (1868). "On the regenerative gas furnace as applied to the manufacture of cast steel". J. Chem. Soc. Chemical Society of London (21): 279–310.
- Burton, Elizabeth; Friedmann, Julio; Upadhye, Ravi (2007). Best Practices in Underground Coal Gasification (PDF) (Report). Lawrence Livermore National Laboratory. W-7405-Eng-48. Archived from the original (PDF) on 6 June 2010. Retrieved 3 January 2013.
- Klimenko, Alexander Y. (2009). "Early Ideas in Underground Coal Gasification and Their Evolution" (PDF). Energies. MDPI Publishing. 2 (2): 456–476. doi:10.3390/en20200456. ISSN 1996-1073.
- Lamb, George H. (1977). Underground coal gasification. Energy Technology Review № 14. Noyes Data Corp. p. 5. ISBN 978-0-8155-0670-6.
- Sury, Martin; et al. (November 2004). "Review of Environmental Issues of Underground Coal Gasification" (PDF). WS Atkins Consultants Ltd. Department of Trade and Industry. COAL R272 DTI/Pub URN 04/1880. Archived from the original (PDF) on 11 June 2007. Retrieved 18 July 2010.
- "Underground Coal Gasification. Current Developments (1990 to date)". UCG Engineering Ltd. Retrieved 24 November 2007.
- "How UCG Works". UCG Association. Retrieved 11 November 2007.
- Portman Energy (3 May 2012). UCG–the 3rd way. 7th Underground Coal Gasification Association (UCGA) Conference. London. Retrieved 1 October 2012.
- Morné Engelbrecht (2015). "Carbon Energy Delivers Innovations in Underground Coal Gasification". 3 (2). Cornerstone, The Official Journal of the World Coal Industry. pp. 61–64.
- Beath, Andrew (18 August 2006). "Underground Coal Gasification Resource Utilisation Efficiency" (PDF). CSIRO Exploration & Mining. Archived from the original (PDF) on 31 August 2007. Retrieved 11 November 2007.
- Sallans, Peter (23 June 2010). Choosing the Best Coals in the Best Locations for UCG. Advanced Coal Technologies Conference. Laramie: University of Wyoming.
- Copley, Christine (2007). "Coal". In Clarke, A. W.; Trinnaman, J. A. Survey of energy resources (PDF) (21st ed.). World Energy Council. p. 7. ISBN 0-946121-26-5. Archived from the original (PDF) on 9 April 2011.
- Walter, Katie (2007). "Fire in the Hole". Lawrence Livermore National Laboratory. Retrieved 6 October 2008.
- "Underground Coal Gasification". Linc Energy. Archived from the original on 16 May 2010. Retrieved 18 July 2010.
- "Cougar Energy Update on UCG Pilot Project at Kingaroy in Queensland". OilVoice. 27 April 2010. Retrieved 31 July 2010.
- "Cougar To Ramp Up UCG Process Down Under". Cougar Energy. Downstream Today. 16 March 2010. Retrieved 31 July 2010.
- "Linc pilot flows first GTL fuel". Upstream Online. NHST Media Group. 14 October 2008. Retrieved 6 August 2009. (Subscription required (help)).
- "Linc Energy Opens CTL Demo Plant". Downstream Today. 24 April 2009. Retrieved 6 August 2009.
- "Linc gears up for Chinchilla GTL". Upstream Online. NHST Media Group. 28 November 2007. Retrieved 6 August 2009. (Subscription required (help)).
- "UCG banned immediately in Queensland". ABC Online. Australian Broadcasting Corporation. 18 April 2016. Retrieved 21 April 2016.
- "Linc Energy Limited (ASX:LNC) Technology Update On Chinchilla Underground Coal Gasification (UCG) Operations". ABN Newswire. Asia Business News Ltd. 10 March 2009. Retrieved 8 August 2009.
- "ESKOM's underground coal gasification project" (PDF). European Commission. 5 May 2008. Retrieved 4 September 2011.[permanent dead link]
- Venter, Irma (12 February 2007). "Coal experts search for ways to cut emissions". Mining Weekly. Creamer Media. Retrieved 4 September 2011.
- Hannah, Jessica (12 August 2011). "Coal gasification demo plant design study under way". Mining Weekly. Creamer Media. Retrieved 4 September 2011.
- "Theunissen Project | Africary". www.africary.com. Retrieved 2016-12-12.
- "South African IPP Gas Program".
- Lazarenko, Sergey N.; Kochetkov, Valery N. (1997). "The underground coal gasification is the technology which answers to conditions of sustainable development of coal regions". In Strakoš, Vladimír; Farana, R. Mine Planning and Equipment Selection 1997. Taylor & Francis. pp. 167–168. ISBN 978-90-5410-915-0.
- Shu-qin, L.; Jun-hua, Y. (2002). "Environmental Benefits of Underground Coal Gasification". Journal of Environmental Sciences (China), vol. 12, no. 2, pp. 284–288.
- Krupp, Fred; Horn, Miriam (2009). Earth: The Sequel: The Race to Reinvent Energy and Stop Global Warming. New York: Norton & Company. ISBN 978-0-393-33419-7.
- National Research Council (U.S.). Committee on Ground-Water Resources in Relation to Coal Mining (1981). Coal mining and ground-water resources in the United States: a report. United States National Academies. p. 113.
- "Underground Coal Gasification: An Overview of an Emerging Coal Conversion Technology". 3 (2). Cornerstone, The Official Journal of the World Coal Industry. 2015. pp. 56–60.
"Beyond fracking", New Scientist feature article (Fred Pearce), 15 February 2014
- African Carbon Energy - 50 MW project
- Ergo Exergy Tech - global supplier of UCG technology
- UCG Association
- Energy & Environmental Research Centre (EERC) - UCG overview
- CO2SINUS: CO2 Storage in In Situ Converted Coal Seams, a research project at RWTH Aachen University. |
To grant money to Small Inventors who fill out the proper paperwork
This petition had 29 supporters
I am a small inventor, and the main problem that I, as well as countless other inventors, have run into is the lack of funds. As a young inventor, I need access to materials that are sometimes costly. While I am able to get many of the materials, the effect on my budget is greater than the reward. Also, as a college student, it is already hard to make money that won't be spent on textbooks, classes, etc.
This petition, if passed, will help aid small inventors by giving them money to fund their research, experiments, etc. The money granted would come in $1,000 increments, not to exceed $100,000 per inventor. While this may seem like a lot, the national budget deals in trillions of dollars.
The requirements for the inventor are as follows:
1) The invention may not be a weapon of mass destruction without specific permission from the U.S. government and military.
2) The inventor must have been a United States citizen for at least the previous five years.
3) The inventor has to be at least 18 years of age, or in the case of permission from guardian, at least 13 years of age.
4) Of any money granted to the inventor, at least 75% has to go directly toward the advancement of research or experimentation.
|
No one person – or company – can possibly know everything there is to know. That’s why we welcome your comments, suggestions and guest blog posts.
As much as we've all heard about carbon footprints, few of us know about water footprints. In addition to the regular water we associate with food and beverages, there is something called "virtual water." That's the water it actually takes to manufacture or grow something to the point where we use it, eat it, wear it or do something else with it. Check out the blog to read more.
Although a relative newcomer to the mineral/metal processing industry, fiberglass reinforced plastic (FRP) has been a material of construction for piping and process equipment for more than 50 years.
Some of the first installations of FRP piping were for handling petroleum effluents and byproducts. Once the material became more recognized, it began to be used for the construction of storage tanks, piping and gas ducts in the pulp and paper industry. Early resin systems were typically epoxies and polyesters. Today’s composite technology offers a range of advanced resin systems including vinyl esters and epoxy novolac vinyl esters, for a host of chemical-resistant applications.
FRP also has a history of over 40 years in seawater applications, such as power utility seawater cooling piping and desalination piping systems. It has proven to be a reliable and cost-effective material in many acid environments, such as sulphuric, hydrochloric and phosphoric acids, for a variety of process applications. In gas scrubbing applications, FRP and dual laminates have been effective materials of construction for handling gas services containing mineral salts and weak acids in the flow stream.
FRP laminates are versatile and can be readily enhanced to develop abrasion-resistant and conductive qualities. While maintaining chemical resistance, FRP laminates can be given abrasion-resistant qualities by adding solids such as silicon carbide, aluminum oxide or ceramics to the resin mixture used to construct the corrosion barrier inner lining.
Where combustible organics, such as kerosene, are in a dissolved effluent, FRP can be made conductive by introducing a carbon veil and/or carbon graphite powder into the first stage of the corrosion barrier construction. With this conductive corrosion barrier any static charge can be dissipated to a grounding connection and safely to ground. Conductive veils have proven to be a reliable means to assist in managing these risks associated with combustible and corrosive services.
In lixiviation, FRP is used throughout the solvent extraction process. FRP tanks handle the storage of acids, process effluents and mineral salts, and in many applications, can be expected to provide low maintenance service for over 20 years, assuming proper design, fabrication and operation.
FRP manufacturing technology has significantly advanced the construction and quality of large-diameter field fabricated FRP tanks. Today these are being built in sizes over 30m in diameter, and FRP piping for process fluid transfer is regularly designed for applications over 1.5m in diameter.
In high-volume desalination services, FRP piping has been built up to 4m in diameter. After 40+ years of service, FRP seawater systems have been internally inspected and found to be in serviceable condition, with little to no maintenance required.
Piping system reliability should be independent of the material of construction. If proper material choices are made for a given application, the expectations of system performance should be high with few maintenance concerns.
Thermal expansion is a notable consideration in system design. Depending on the type of laminate construction and design temperatures of the piping system, FRP will expand 2.5 to 3 times the comparative rate of carbon steel.
At typical process temperatures ranging from 60–95 °C, thermal expansion can be a governing factor in the system design. Depending on the geographic location of the plant, occasional loads such as wind and seismic can be significant factors as well. In these applications, it is frequently necessary to evaluate FRP piping systems using a formal pipe stress/flexibility program to conduct a comprehensive engineering evaluation and validate system performance within the defined limits of the governing code.
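A worked example makes the point (the coefficients and geometry are typical handbook assumptions, not values from this article): unrestrained growth is ΔL = αLΔT, so for a 30 m straight run heated from 20 °C to 90 °C,

```latex
\Delta L \;=\; \alpha \, L \, \Delta T
\;\approx\; (30\times10^{-6}\,/^{\circ}\mathrm{C})(30\,\mathrm{m})(70\,^{\circ}\mathrm{C})
\;\approx\; 0.063\,\mathrm{m} \;=\; 63\,\mathrm{mm}
```

versus roughly 25 mm for the same run in carbon steel (α ≈ 12×10⁻⁶/°C), which is the 2.5–3× ratio cited above and the reason expansion loops, guides and anchors dominate FRP layout.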
A comprehensive evaluation should include project- or vendor-specific material properties and realistic estimates of stiffness for equipment connections, supports and supporting steel, as these are rarely truly rigid.
Unreasonable estimates of support stiffness and boundary conditions can result in unjustified and excessive piping loads on equipment and excessive predictions of system stresses. Qualified estimates of input data will yield the most reliable and defendable results for system engineering.
To achieve quality expectations during fabrication and installation of FRP equipment, the project must rely on well-defined standards, detailed and descriptive specifications and the best industry practices defined within those documents.
Time will be well-spent to develop a clear and concise equipment specification. This time will be appreciated in the heat of the project when questions are raised, which are critical to the quality and performance of the equipment.
During the fabrication and installation of piping and large equipment, such as process vessels, stacks and linings, joints are made by laminate bonding. Whether in the shop or in the field, a reasonable measure of environmental control (managing temperature, humidity and cleanliness) is required to achieve consistent results. A clean and dry work environment is required to produce satisfactory laminates.
FRP piping and field fabricated tanks are installed all over the world, in a wide range of climatic conditions. A desirable working temperature for most resin systems lies between about 10 and 45 °C. Where ambient conditions are more extreme, measures should be taken to control and improve the work environment, such as tarping for shade or rain protection, or tenting where a portable temporary enclosure is needed to further condition the immediate work space. These are typical jobsite measures for managing the elements during an FRP installation.
From a cost and delivery standpoint, FRP can be an attractive material choice for hydrometallurgy. As with most projects, the key to success is managing resources and taking care of the details.
FRP is a mainstream material of construction for an abundance of corrosive applications. The continued development of best practices and quality standards will only build greater confidence in FRP for these applications.
If you’re looking for more information about the kind of work Maverick Applied Science does, check out our various projects.
Expansion Joints for FRP Connections
Presented by Rob Coffee at the 2nd Annual Plant Engineer’s FRP Forum. Rob is the VP of Sales and Marketing at Proco Products, Inc., and he discusses Expansion Joints for FRP Connections. |
Just as the world needs doctors, lawyers and CPAs, it also needs heating, ventilation and air conditioning designers and technicians; welders to build schools and plants; certified fire life safety professionals to ensure a building doesn't place lives in jeopardy; professionals to create building systems to keep occupants safe, comfortable and breathing clean air; industrial workers who construct plants for power and sustainable energy; and technicians to conduct energy audits to keep buildings operating efficiently.
These high-tech skills take education, dedication and talent. They are necessary career paths important to the proper functioning of the country and are there for those who take interest in a different kind of work.
Your career in sheet metal starts with a click. Browse the sections below to learn more about the different paths available to you as an apprentice. |
The Kennedy Mine, in California’s Gold Country, is famous for being one of the deepest gold mines in the world (at 5,912 feet) and demonstrates how gold changed an entire way of life in California.
Prospected in 1860, reorganized in 1886 and continuously run until 1942, the Kennedy Gold Mine produced approximately $34,280,000 in gold. It still has one of the tallest head frames in existence today.
In 1928 a surface fire burned all the structures except two. All other buildings and foundations were built after 1928. The company operated the mine until 1942 when the U.S. Government closed gold mines because of the war effort.
On August 27, 1922, when forty-seven miners were trapped by fire in the nearby Argonaut Mine, 4,650 feet below ground, rescue efforts were launched from the Kennedy Mine to connect the tunnels of the two mines. Unfortunately, progress was slow and rescuers arrived too late to save the miners in the Argonaut. The Argonaut Mine incident was the worst gold mine disaster in US history.
The head frame is 135 feet tall and composed of iron beams. The original 100-foot-tall wooden headframe burned (along with most of the surface developments) in 1928. Debris from the burning headframe fell down the main shaft, blocking any chance of exit for the miners still working thousands of feet below. In the aftermath of the Argonaut Mine disaster, a connecting tunnel had been left in place at the 4,600-foot level between the two mines. This tunnel allowed the miners to escape unharmed.
The mine office is the best preserved building on the mine property. It was built in 1908 and was one of the only buildings to survive the 1928 fire. The ground floor was the assay office, which tested ores at the end of every day, so the quality of ore was always known. The second floor contained mine offices, the safe, and the pay room. Miners made around $4 a day in the 1920s, and specialists (foremen, carpenters, assayers, and blacksmiths) made almost twice as much. These were fairly reasonable wages for the time. The third floor contained four bedrooms which were used to house investors when they paid a visit to the mine. The investors mostly lived in San Francisco, and the journey could be long and dusty.
Pioneer miners tribute.
The huge steel head frame whose pulleys guided the miners one mile down into the bowels of the earth. God I wish I could get down there.
There were two big blast furnaces located on the property, one for separating the gold from the mercury, and the second for melting the gold and producing ingots for delivery to the mint in San Francisco.
The explosives vault.
Ore was hoisted to the surface of the mine, and processed in a stamp mill on the slope below the headframe. The Kennedy Mine had one of the largest stamp mills in the entire Mother Lode, with 100 stamps. Each stamp weighed nearly half a ton, and they were in constant motion, vertical hammers rising and falling, crushing the ore into a sand-like consistency. Mercury and other “benign” chemicals were used to separate the gold from the waste material. Mercury combines with gold to form a solid alloy called amalgam.
The mine waste was a problem. It was full of sulfide minerals that converted to acids on exposure to the atmosphere, and water in the town below was being fouled. In the early 1900s, a system of buckets and giant wheels was constructed to carry tailings over a nearby ridge to a reservoir that could isolate the poisons from the domestic water supply.
The Kennedy Mine near Jackson, CA is open every Saturday, Sunday and holiday from 10 AM to 3 PM, March through October. Admission is $10.00 for ages 13 to adult, $6.00 for youngsters 6 through 12, and free to those under 6. Admission includes a FREE guided tour. Guided Tours are recommended for an interesting, in-depth, and educational tour of the grounds [about 1.5 hours]. |
It's easy for businesses to keep track of what we buy, but harder to figure out why. Enter a nascent field called neuromarketing, which uses the tools of neuroscience to determine why we prefer some products over others. Harvard Business School marketing professor Uma R. Karmarkar explains how raw brain data is helping researchers unlock the mysteries of consumer choice.
BY CARMEN NOBEL
In the early 1950s, two scientists at McGill University inadvertently discovered an area of the rodent brain dubbed "the pleasure center," located deep in the nucleus accumbens. When a group of lab rats had the opportunity to stimulate their own pleasure centers via a lever-activated electrical current, they pressed the lever over and over again, hundreds of times per hour, forgoing food or sleep, until many of them dropped dead from exhaustion. Further research found pleasure centers exist in human brains, too.
Most humans are a little more complicated than rats. But we are largely motivated by what makes us feel good, especially when it comes to our purchasing decisions. To that end, many major corporations have begun to take special interest in how understanding the human brain can help them better understand consumers. Enter a nascent but fast-growing field called neuromarketing, which uses brain-tracking tools to determine why we prefer some products over others.
"People are fairly good at expressing what they want, what they like, or even how much they will pay for an item," says Uma R. Karmarkar, an assistant professor at Harvard Business School who sports PhDs in both marketing and neuroscience. "But they aren't very good at accessing where that value comes from, or how and when it is influenced by factors like store displays or brands. [Neuroscience] can help us understand those hidden elements of the decision process."
To be sure, there is a clear difference between the goals of academia and the goals of a corporation in utilizing neuroscience. For Karmarkar, her work falls into the category of decision neuroscience, which is the study of what our brains do as we make choices. She harbors no motive other than to understand that process and its implications for behavior, and draws on concepts and techniques from neuroscience to inform her research in marketing.
For corporations, on the other hand, the science is a means to an end goal of selling more stuff. But the tools, once restricted to biomedical research, are largely the same. And Karmarkar expects brain data to play a key role in future research on consumer choice. (In a recent background note on neuromarketing, she discusses the techniques that have helped researchers decode secrets such as why people love artificially colored snack food and how to predict whether a pop song will be a hit or a flop.)
Tricks of the trade
When tracking brain functions, neuroscientists generally use either electroencephalography (EEG) or functional magnetic resonance imaging (fMRI) technology. EEG measures fluctuations in the electrical activity directly below the scalp, which occurs as a result of neural activity. By attaching electrodes to subjects' heads and evaluating the electrical patterns of their brain waves, researchers can track the intensity of visceral responses such as anger, lust, disgust, and excitement.
Karmarkar cites the example of junk-food giant Frito-Lay, whose EEG testing of consumers' responses to Cheetos yielded insights the company could act on.
That data in hand, Frito-Lay moved ahead with an ad campaign called "The Orange Underground," featuring a series of 30-second TV spots in which the Cheetos mascot, Chester Cheetah, encourages consumers to commit subversive acts with Cheetos. (In one commercial, an airline passenger quietly sticks Cheetos up the nostrils of a snoring seatmate. Problem solved.) The campaign garnered Frito-Lay a 2009 Grand Ogilvy Award from the Advertising Research Foundation.
EEG vs. fMRI
Karmarkar notes that EEG and fMRI have different strengths and weaknesses, and that EEG has some limitations in its reach. "The cap of electrodes sits on the surface of your head, so you're never going to get to the deep areas of the brain with EEG," Karmarkar explains.
The fMRI uses a giant magnet, often 3 Teslas strong, to track the blood flow throughout the brain as test subjects respond to visual, audio, or even taste cues. The technology has its own logistical limitations. Running an fMRI scanner costs researchers up to $1,000 per hour, and studies often use 20-30 subjects, Karmarkar says. And while EEG lets subjects move around during testing, fMRI requires them to lie very still inside a machine that can be intimidating.
"This is a sophisticated piece of medical equipment that exerts a very strong magnetic field at all times, and it's important to be very careful around it," Karmarkar says. "For example, you cannot take metal into a magnet room!"
But fMRI is invaluable to neuroscience and neuromarketing in that it gives researchers a view into the aforementioned pleasure center. "The more desirable something is, the more significant the changes in blood flow in that part of the brain," Karmarkar says. "Studies have shown activity in that brain area can predict the future popularity of a product or experience."
In her note, Karmarkar discusses research by Emory University's Gregory Berns and Sara Moore, who connected the dots between neural activity and success in the music industry. In a seminal lab experiment, teenagers listened to a series of new, relatively unknown songs while lying inside an fMRI machine. The researchers found that the activity within the adolescents' pleasure centers correlated with whether a song achieved eventual commercial success. The OneRepublic song Apologize performed especially well in both the brain scans and the market.
"Importantly, Berns and Moore also asked their original study participants how much they liked the songs they heard, but those responses were not able to predict sales," Karmarker's note states, illustrating the marketing value of subconscious cerebral data.
Neuromarketing can provide important but complex data to companies that target a global audience. While product testing may provide similar neural responses in American and Asian subjects, for instance, the marketing implications may be very different.
"Expressions of happiness in some Eastern cultures are expressed as a sense of calm or peace, whereas in some Western cultures, happiness means jumping around with joy and excitement," Karmarkar explains. "So you might get two totally different fMRI results that actually mean the same thing—or you may have two totally different stimuli create the desired effect of profound happiness, but for different reasons. If you get an excited effect in an Eastern market, it may not be a good outcome, even though that was the effect you wanted in a Western market. On the other hand, a sense of peace might be misconstrued as a failure."
For businesses looking to enlist the services of a neuromarketing company, she advises watching out for consulting firms that claim to offer such services but don't really have the technology or expertise to back up the claim. Rather, look for a company whose employees have a healthy, skeptical respect for neuroscience.
"The rubric for picking a good [firm] is making sure it was started by a scientist, or has a good science advisory board," Karmarkar says. "This is a field where scientists are very, very skeptical, and we should be. It's easy to feel like you've discovered some big, important truth when you see that the brain has done something that correlates with behavior. And it's just as easy to overstate our conclusions."
For consumers, the idea of giving advertisers additional insight into the subconscious mind might prompt privacy concerns. But Karmarkar says that the research is more about understanding brain waves, not controlling them.
"It's similar to the concerns about genetics," she explains. "People wonder, now that we can map the genome, are we going to manipulate the genome? I think it's a valid and important question to ask. But I don't think it's the direction that companies should take or that academics are taking."
She adds, though, that we need to keep in mind that advertisers have been successfully controlling our brains, to some extent, since long before the existence of EEG or fMRI technology.
"Imagine Angelina Jolie biting into an apple," she says. "It's the juiciest apple ever. She's licking her lips. There's juice running down her chin. Now if I spend some time setting up that scenario and then follow up by asking you to tell me how much you like Mac computers, I promise you that you'll rate them more highly than you would have if I hadn't just talked about how great that apple was for Angelina Jolie. So, yes, I just used your brain to manipulate you. Sex sells, and it has since the dawn of time. It sells because it engages that pleasurable reward center of your brain. As academics, neuroscience just helps us to understand how."
About the author: Carmen Nobel is senior editor of Harvard Business School Working Knowledge.
Your letters will be more successful if you focus on positive wording rather than negative, simply because most people respond more favorably to positive ideas than negative ones. Words that affect your reader positively are likely to produce the response you desire in letter-writing situations. A positive emphasis will persuade the reader and create goodwill. In contrast, negative words may generate resistance and other unfavorable reactions. You should therefore be careful to avoid words with negative connotations. These words either deny—for example, no, do not, refuse, and stop—or convey unhappy or unpleasant associations—for example, unfortunately, unable to, cannot, mistake, problem, error, damage, loss, and failure.
When you need to present negative information, soften its effects by superimposing a positive picture on a negative one.
- Stress what something is rather than what it is not.
- Emphasize what the firm or product can and will do rather than what it cannot.
- Open with action rather than apology or explanation.
- Avoid words which convey unpleasant facts.
Compare the examples below. Which would be more likely to elicit positive reader response?
- Negative: In response to your question about how many coats of Chem-Treat are needed to cover new surfaces: I regret to report that usually two are required. For such surfaces you should figure about 200 square feet per gallon for a good heavy coating that will give you five years or more of beautiful protection.
- Positive: In response to your question about how many coats of Chem-Treat are needed to cover new surfaces: One gallon is usually enough for one-coat coverage of 500 square feet of previously painted surface. For the best results on new surfaces, you will want to apply two coats.
- Negative: Penquot sheets are not the skimpy, loosely woven sheets ordinarily found in this price class.
- Positive: Penquot sheets are woven 186 threads to the square inch for durability and, even after 3-inch hems, measure a generous 72 by 108 inches.
- Negative: We cannot ship in lots of less than 12.
- Positive: To keep down packaging costs and to help customers save on shipping costs, we ship in lots of 12 or more.
In addition, you should reemphasize the positive through embedded position and effective use of space.
- Place good news in positions of high emphasis: at the beginnings and endings of paragraphs, letters, and even sentences.
- Place bad news in secondary positions: in the center of paragraphs, letters, and, if possible, sentences.
Effective Use Of Space
Give more space to good news and less to bad news.
Evaluate the examples below to determine whether or not they present negative information favorably.
- To make the Roanoke more stable than other lamps of this size, our designers put six claw feet instead of the usual four on the base and thus eliminated the need for weighting. Claw feet, as you know, are characteristic of 18th-century design.
- No special training programs are normally offered other than that of the College Graduate in Training rotational training period. We do not expect our employees to continue their education, but we do have an excellent tuition refund program to assist in this regard (see Working with General Motors, page 8). Where an advanced degree is essential, individuals are recruited with those particular advanced degrees. Both Butler and IUPUI offer courses leading to an MBA degree.
- With our rigid quality standards, corrections of Adidas merchandise run less than .02 percent of our total line. Because of an oversight in our stitching department, a damaged needle was inadvertently used and caused the threads to come loose in these particular bags. Since we now have a check on all our machine needles before work each day, you can be assured that the stitching on our Adidas carrying bags will last the lifetime of the bags. Thank you for calling our attention to the loose stitching.
- We are sorry that we cannot furnish the club chairs by August 16.
- I have no experience other than clerking in my father’s grocery store.
- ABC Dog Biscuits will help keep your dog from getting sick.
Reinforcement layout and reinforcement calculation
Reinforcing the foundation is a process necessary to strengthen the structure and increase the lifetime of the building. In other words, it is the assembly of the "skeleton" that acts as a protective component, resisting the soil pressure on the foundation walls. But for this function to be realized to the fullest, you must not only calculate the reinforcement for a strip foundation competently, but also know how to organize the construction work.
- How a strip foundation is reinforced
- Building the reinforcing structure
- Calculating material consumption
How a strip foundation is reinforced
The basis of a strip foundation is concrete: a mixture of cement, sand and water. Unfortunately, the physical characteristics of this building material alone cannot guarantee that the foundation will not deform. To increase its ability to withstand ground shifts, temperature swings and other negative factors, a metal framework is required.
Steel is ductile yet holds securely in the concrete, which makes reinforcement an essential step in the whole undertaking.
Reinforcement is required wherever tension zones may occur. The greatest tension is observed at the surface of the foundation, which argues for placing reinforcement close to the upper level. On the other hand, to prevent corrosion of the frame, it must be protected from external influences by a layer of concrete.
Important! The optimal concrete cover for foundation reinforcement is 5 cm from the surface.
Since deformation cannot be predicted in advance, tension zones may occur at the bottom (when the middle sags downward) or at the top (when the foundation bows upward). Accordingly, reinforcing bars 10-12 mm in diameter must run along both the top and the bottom, and these longitudinal bars should have a ribbed surface.
This ensures good bond with the concrete.
The remaining parts of the skeleton (the horizontal and vertical transverse rods) can have a smooth surface and a smaller diameter.
For a monolithic strip foundation, which is typically no more than 40 cm wide, four longitudinal bars (10-16 mm) joined into a frame with 8 mm rods are usually sufficient.
Important! With a strip 40 cm wide, the distance between the longitudinal bars should be 30 cm.
A strip foundation is long but narrow, so it experiences longitudinal tension while transverse tension is negligible. It follows that the smooth, thinner vertical and transverse rods are needed only to hold the frame together, not to carry loads.
Particular attention should be paid to reinforcing the corners: there are cases when deformation occurs not in the middle of a wall but at the corner sections. Corners should be reinforced so that one end of a bent bar runs into one wall and the other end into the adjoining wall.
Experts advise tying the bars together with wire. After all, not every grade of reinforcing steel can be welded. And even where welding is admissible, it often brings problems that wire ties avoid: overheating the steel (which changes its properties), thinning of the rod at the weld, inadequate weld strength, and so on.
Building the reinforcing structure
Reinforcement begins with installing the formwork, whose inner surface is lined with parchment paper to make removing it easier later. The frame is built according to the following scheme:
1. Reinforcing rods as long as the foundation is deep are driven into the ground along the trench. Keep a distance of 50 mm from the formwork and a pitch of 400-600 mm.
2. Supports 80-100 mm high are placed at the bottom, and the bottom row of reinforcement (2-3 bars) is laid on them. Bricks set on edge serve quite well as supports.
3. The upper and lower rows of reinforcement are fixed to the vertical rods with transverse ties.
4. The intersections are fastened with wire ties or by welding.
Important! Strictly maintain the clearance to the future outer surfaces of the foundation; bricks are the best way to do this. This is one of the most important conditions, because the metal structure must not rest directly on the bottom of the trench: it must be raised at least 8 cm above ground level.
After the reinforcement is installed, all that remains is to form the ventilation holes and pour the concrete.
Good to know!
Vents not only improve the damping characteristics of the foundation but also prevent rot from setting in.
Calculating material consumption
To calculate materials for a strip foundation, you need to know some parameters in advance. Consider an example. Suppose our foundation is rectangular with the following dimensions: width 3.5 meters, length 10 meters, casting height 0.2 m, strip width 0.18 m.
First of all, calculate the total volume of the casting. For this you need to treat the base as if it were a hollow box: find the perimeter of the base, then multiply the perimeter by the strip width and the casting height.
P = AB + BC + CD + AD = 3.5 + 3.5 + 10 + 10 = 27 (meters)
V = 27 x 0.18 x 0.2 = 0.972 m3
But the calculation for a monolithic foundation does not end there. We have learned that the casting itself occupies, rounded, a volume of 0.97 m3. Now we need the volume of the inside of the base, i.e., the space within our strip.
To get the volume of the "stuffing," multiply the width and length of the base by the casting height to find the overall volume:
10 x 3.5 x 0.2 = 7 (cubic meters)
Subtract the volume of the casting:
7 - 0.97 = 6.03 m3
Bottom line: the volume of the casting is 0.97 m3; the internal volume of the filler is 6.03 m3.
Now we need to calculate the reinforcement. Suppose the diameter will be 12 mm, with two horizontal runs in the casting (i.e., 2 bars), and vertical rods placed, say, every half meter. The perimeter is known: 27 meters. So multiply 27 by 2 (horizontal bars) to get 54 meters.
Vertical bars: 54 / 0.5 + 2 = 110 rods (108 intervals of 0.5 m plus two at the edges). Adding one more rod at each corner gives 114 bars.
For example, with a rod height of 70 cm, this comes to: 114 x 0.7 = 79.8 meters.
The final touch is the formwork. Suppose we build it from boards 2.5 cm thick, 6 meters long and 20 cm wide.
We calculate the area of the side surfaces: the perimeter multiplied by the height of the casting, then by 2 (with a margin, since the inner perimeter is only slightly smaller than the outer): (27 x 0.2) x 2 = 10.8 m2
The area of one board: 6 x 0.2 = 1.2 m2; 10.8 / 1.2 = 9 boards
We need 9 boards 6 m long. Do not forget to add boards for bracing (at your discretion).
Result: about 1 m3 of concrete is required; about 6.5 m3 of filler; 134 meters of reinforcement; and 54 meters of boards (20 cm wide), plus bracing bars and screws. The values are rounded.
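For anyone adapting this example, the arithmetic above is easy to script. Below is a minimal Python sketch that reproduces the worked figures; the dimensions, the half-meter rod spacing, the 70 cm rod height and the board size are simply the example values from this text, so substitute your own.

```python
# Strip-foundation material estimate -- reproduces the worked example above.
# All dimensions are in meters and are the example values from the text.

width, length = 3.5, 10.0        # outer dimensions of the rectangular base
cast_height = 0.2                # height of the concrete casting
belt_width = 0.18                # width of the strip (belt)
rod_spacing = 0.5                # vertical rods every half meter
rod_height = 0.7                 # height of each vertical rod
board_len, board_wid = 6.0, 0.2  # formwork board dimensions

perimeter = 2 * (width + length)                       # 27 m
casting_volume = perimeter * belt_width * cast_height  # ~0.97 m^3
box_volume = width * length * cast_height              # 7 m^3, as if a solid box
filler_volume = box_volume - casting_volume            # ~6.03 m^3 inside the strip

horizontal_rebar = perimeter * 2                       # two horizontal runs: 54 m
# 108 half-meter intervals + 2 edge rods + 4 corner rods = 114 vertical bars
n_vertical = int(horizontal_rebar / rod_spacing) + 2 + 4
vertical_rebar = n_vertical * rod_height               # 114 x 0.7 = 79.8 m

side_area = (perimeter * cast_height) * 2              # both formwork faces: 10.8 m^2
n_boards = side_area / (board_len * board_wid)         # 9 boards

print(f"concrete: {casting_volume:.2f} m^3, filler: {filler_volume:.2f} m^3")
print(f"rebar: {horizontal_rebar + vertical_rebar:.1f} m, boards: {n_boards:.0f}")
```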
Now you know not only how to reinforce a strip foundation, but also how to calculate the required materials. This means the foundation you build will be reliable and strong, permitting the construction of monolithic structures of any configuration.
Flowcharts were originally used by industrial engineers to structure work processes such as assembly line manufacturing. Today, flowcharts are used for a variety of purposes in manufacturing, architecture, engineering, business, technology, education, science, medicine, government, administration and many other disciplines.
Using just a few words and some simple symbols, they show clearly what happens at each stage and how this affects other decisions and actions. In this section, we look at how to create and use flow charts, and explore how they can help you to solve problems in your processes.
We often talk about morale in the workplace. It’s a factor we strive to improve. Even without formal measurement, employers and their staff equate high morale to high performance. Dwight D. Eisenhower once said, “Morale is the greatest single factor in successful wars.” But why? How much impact does morale truly have?
The Merriam-Webster dictionary defines morale as, “the mental and emotional condition (as enthusiasm, confidence, or loyalty) of an individual or group with regard to the function or tasks at hand,” or, “a sense of common purpose with respect to a group.” Morale is more than just employee happiness or satisfaction, and it impacts performance in the workplace.
What affects morale?
As individual employees, we all come to work with a finite amount of mental energy to spend. It’s as if we each have a bucket and it is full at the start of our work day. Your bucket has two spigots, or valves, through which your energy may flow. One valve funnels the energy into success, or productivity – your tasks and job. The second spigot diverts this mental energy into self-maintenance and repair, or coping. Stress often opens this valve to some degree – the more stress you feel, the wider this valve opens, siphoning off valuable energy that could be flowing into the productivity valve. And, with both valves open, mental energy drains faster. To avoid being drained too quickly, the productivity valve will slowly begin to close.
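To make the metaphor concrete, here is a toy Python sketch of the bucket with two valves. The linear split between coping and productivity is an invented illustration, not a validated psychological model; only the qualitative point, that more stress means less energy reaching the work, comes from the text.

```python
def split_energy(bucket: float, stress: float) -> tuple[float, float]:
    """Split a day's mental energy between productivity and coping.

    `bucket` is the energy available at the start of the day and
    `stress` is a 0..1 level that opens the coping valve wider.
    The proportional split is an assumption made for illustration.
    """
    stress = min(max(stress, 0.0), 1.0)
    coping = bucket * stress          # energy siphoned into self-maintenance
    return bucket - coping, coping    # (productive energy, coping energy)

for stress in (0.1, 0.4, 0.7):
    work, cope = split_energy(1.0, stress)
    print(f"stress={stress:.1f}: {work:.0%} reaches the work, {cope:.0%} goes to coping")
```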
The impact of stress on morale, and subsequently on performance, is real. To increase performance, you need to keep the productivity valve open and minimize coping. There are countless ways to do this, including:
- Having a clear purpose and goals for your job or task.
- Connecting with others in the workplace, or feeling part of a team.
- Communicating clearly and effectively between individuals and groups.
- Having decent work-life balance.
- Feeling confident in your ability to complete the task at hand.
- Using your skills to their fullest potential.
- Seeing clear career opportunities.
- Having enough resources and being able to effectively prioritize your work.
For example, an employee named Jen has four projects with the same deadline. She stays late at night to continue working to meet the deadline, but she feels like she’s missing out on valuable “home” time with the family. That stress lowers her overall morale, which may affect her work on other projects.
The most efficient way to improve morale is to identify what’s causing the coping spigot to open. If there’s a specific type of workplace stress triggering a decrease in morale, you can take steps to address the issue. In Jen’s case, she might be missing key information that would help prioritize her four projects.
Measuring stress and morale doesn’t have to be complex – there are several assessments and tools available that can identify performance blockages.
You can even get basic measurements by creating your own survey. Simply asking for an individual’s morale, on a scale of 1 (terrible) to 10 (excellent), can allow you to determine if there are factors present that are opening the self-maintenance spigot. Checking employee morale on a regular basis is important for creating and maintaining a healthy workplace.
What steps do you take to improve morale?
Some people are leaders and others are followers. Are you a mixture of both? Do you know what you must do as a business leader? Many people haven’t mastered leadership skills. Read on to learn what it takes to become a good leader.
When working to hone leadership skills, you cannot go wrong by starting with honesty. Your job will be to lead others in the right direction. If you are an honest leader, people will see that and have a great appreciation for it. When those you are leading learn how important honesty is to you, it will help to breed honesty in them as well.
Make sure to show appreciation for those around you. It doesn’t take long to write something that says thank you or good job, and that may mean quite a bit to those that work hard all day long. It’s free to do, and means so much to others.
Always remain approachable. Some people think that leaders should be intimidating. It is not a good strategy, however; it only makes your team dislike you. Let subordinates know that they can bring you any concerns they have.
Prepare yourself thoroughly prior to meeting with the team. Try to anticipate likely questions. Sit down and think of a good response to each question. It’s this kind of preparation that builds respect. It is also a great time-saving method.
You should schedule some time every day to go over just how well things are running at work. You could appoint a few people to provide daily input. Suggestions can be made, changes can be discussed, and friends can be made as well.
To be a great leader, know what your weaknesses and strengths are. Too much confidence is only going to set you up for failure as a leader. Work on improving weak skills.
Mean everything you say. Leadership starts with being accountable for your words and actions. Your words and actions reflect on your company and your team. If you have made missteps or errors, you must acknowledge them. Don’t expect it to be overlooked or allow others to do it for you.
Work to build cooperation within your team. Be there so that your employees can talk about issues and so you can give your best answers. Allow your staff to do their jobs and avoid interfering if possible.
Successful leaders take the time to listen to their employees and seek out their feedback on workplace issues. They may have ideas for new products or how to improve production. It’s possible you will hear some criticism, but don’t let that deter you. Acknowledging the opinions of your workforce will build trust.
Now you should have the confidence to become a great leader. Be confident, and soon you will see others following you. Use the advice you learned here to lead those in your life instead of being content to follow. It's possible to make exactly the right things happen and to have your colleagues help you with it.
Often it can be hard to understand the profound difference between effort and duration.
So, let us take a moment to examine this distinction and why it is important.
During the estimation and planning process, the effort and duration are determined for the planned tasks.
Effort is the amount of work units required to complete any given task. Effort may also be referred to as man-hours, man-days, man-weeks, man-months, or even man-years. In order to determine the task duration, the effort required to complete the task must be determined first.
Duration is the calendar time required to execute any given task. Duration is measured in hours, days, weeks, months, or years. Duration can only be calculated once we determine who will perform the task, how many people are going to perform the task, and whether they are available to perform the task at a reliable level of availability.
Once you have an effort estimate, you have to estimate the duration. This is closely related to constructing a draft schedule, and inherently involves decisions on how many people you will put on the project. Headcount can, to a certain extent, be traded for schedule, but remember that there is still a minimum duration for some tasks, e.g. it is impossible to make a baby in one month by putting nine women to work on the task.
If you have 2 teleporters that have to be set up, and only one teleportation expert, you are not going to be able to do those two tasks in parallel. Thus the duration will be longer but the effort will remain the same.
Installation and set up of 2 teleporters is estimated to take 80 hours.
If you have two teleportation experts committed to 40 hours per week, the duration would be 5 calendar days.
If there is one teleportation expert committed to 40 hours per week, the duration would be 12 calendar days.
If there is one teleportation expert committed to 20 hours per week, the duration would be 26 calendar days.
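Those three scenarios reduce to one small calculation. Here is a minimal Python sketch; the conversion from working weeks to calendar days (five working days per week, plus the weekends that fall between working weeks) is a simplifying assumption for illustration, not a scheduling standard.

```python
import math

def duration_calendar_days(effort_hours: float, people: int, hours_per_week: float) -> int:
    """Convert effort into an approximate calendar duration.

    Assumes five working days per week and adds the weekends that
    fall between working weeks -- a simplification for illustration.
    """
    weeks = effort_hours / (people * hours_per_week)  # working weeks of effort
    working_days = math.ceil(weeks * 5)
    weekends_spanned = (math.ceil(weeks) - 1) * 2
    return working_days + weekends_spanned

EFFORT = 80  # hours to install and set up the 2 teleporters

print(duration_calendar_days(EFFORT, people=2, hours_per_week=40))  # 5 calendar days
print(duration_calendar_days(EFFORT, people=1, hours_per_week=40))  # 12 calendar days
print(duration_calendar_days(EFFORT, people=1, hours_per_week=20))  # 26 calendar days
```

In every case the effort stays at 80 hours; only the duration changes with headcount and availability.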
These two terms relate strongly to schedule, which is the project timeline. This involves identifying the dates (absolute or relative to a start date) that project tasks will be started and completed, resources will be required and upon which milestones will be reached.
When you are working towards a deadline, understanding and tracking the difference between duration and effort will allow you to schedule the time to spend on other tasks and still make your deadline.
A few classic quotes on this topic from Frederick Brooks's book, The Mythical Man-Month: Essays on Software Engineering:
Men and months are interchangeable commodities only when a task can be partitioned among many workers with no communication among them. This is true of reaping wheat or picking cotton; it is not even approximately true of systems programming. The added burden of communication is made up of two parts: training and intercommunication.
Each worker must be trained in the technology, the goals of the effort, the overall strategy, and the plan of work. This training cannot be partitioned, so this part of the work varies linearly with the number of workers. Since software construction is inherently a systems effort - an exercise in complex interrelationships - communication effort is great, and it quickly dominates the decrease in individual task time brought about by partitioning. Adding more men lengthens, not shortens, the schedule.
I hope this post has clarified how to use the two terms properly.
KPI Benchmarks: Scrap Rate
- Benchmark Sample Size (n): 20
* Is High or Low Best: Lower is Better
Scrap Rate measures the quality of the production output of the Manufacturing function. A high value for this KPI likely indicates poor raw materials inputs, careless production setup procedures, faulty machinery or ineffective production operators, all of which will result in increased costs to the company and slow down day-to-day operations. A very high Scrap Rate can also lead to an inability to produce enough finished goods to fulfill customer orders. Scrap Rate can be reduced by increasing training for programmers and operators, documenting product data throughout the process, or utilization of scientific approaches, such as Six Sigma or Multivariate Testing.
The number of units produced that must be scrapped because of product defects or errors divided by the total number of units produced by the manufacturing group over the same period of time, as a percentage.
KPI Best Practices
- Preventative maintenance performed regularly on manufacturing equipment
- Use lean manufacturing methods such as Six Sigma, MVT, etc.
- Use high quality raw materials and components for production runs
KPI Calculation Instructions: Scrap Rate
Two values are used to calculate this KPI: (1) the number of units that are scrapped during the production process, and (2) the total number of units produced during the same period of time. Scrapped units are defined as any units of production output that are neither good units nor reworked units. Good units are defined as units that pass inspection and are approved for sale or for use as a component in another production run. Include good units, reworked units and scrapped units in the denominator of this calculation.
KPI Formula :
(Number of Units Scrapped / Total Number of Units Produced) * 100
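The formula is simple to compute directly. Here is a minimal Python sketch; the unit counts are made-up illustration values, and the denominator includes good, reworked and scrapped units, as the calculation instructions above require.

```python
def scrap_rate(scrapped: int, good: int, reworked: int) -> float:
    """Scrap Rate, %: scrapped units over all units produced in the period.

    The denominator counts good, reworked and scrapped units, per the
    calculation instructions above.
    """
    total_produced = good + reworked + scrapped
    return scrapped / total_produced * 100

# Illustrative numbers only: 120 scrapped out of 10,000 total units -> 1.2%
print(f"{scrap_rate(scrapped=120, good=9_700, reworked=180):.1f}%")
```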
Okay, not the original corporation, but one of the most powerful institutions that has ever existed. Also called the “Honourable East India Company” or the “British East India Company” and informally as “John Company,” this joint-stock company at one time controlled half the world’s trade.
A joint stock company is a business venture in which individuals own shares of a company. Each share is worth an equal percentage. But by dividing the ownership into shares, different people can own different percentages of the company. (A person who owns 20 shares owns twice as much of the entity as a person who owns 10.)
China created the first such businesses, but the earliest European example was in France in about 1250. In England, this sort of thing was jump-started by the discovery of the New World. In 1600, the East India Company received a royal charter from Queen Elizabeth I. Royalty and rich men owned the shares.
The charter awarded the newly formed company a monopoly on trade with all countries east of the Cape of Good Hope and west of the Straits of Magellan for a period of fifteen years. Anybody else who traded in that half of the world was in breach of the charter. A trader who had not been issued a license from the Company would forfeit ships and cargo (half of which went to the Crown and the other half to the Company). Such traders could also face imprisonment at the "royal pleasure."
(Image caption: The ship Red Dragon was the East India Company's first warship.)
The first two successful voyages were made in 1601 (returning in 1603) and 1604, while a third voyage lasted from 1607 to 1610. Though the company originally struggled, facing stiff competition from the Dutch East India Company, profits began to grow once it established factories for the processing of pepper and other spices.
King James I of England renewed the Company's charter in 1609, allowing the Honourable East India Company an indefinite monopoly on trade in the East. In 1612, he authorized agents of the Company to act as ambassadors to the rulers of India, the Moghul Empire. This resulted in more factories and more trade.
Not content to have the only trading rights between India and England, the Company began a series of battles with its competition: the Dutch East India Company, the Portuguese and the Spanish. By 1717, the monopoly existed from both ends; India had also given John Company exclusive trading rights.
The English crown continued to grant favors to the East India Company. By 1690, it was able to mint its own money, acquire new self-directed territory, build forts and castles, raise and command armies, and literally to wage war. For all intents and purposes, the Company was a sovereign nation. It used this power to gain influence in China and even Japan.
Then in 1695, Henry Avery, pirate captain extraordinaire, captured an Indian trading ship belonging to the Grand Moghul and carrying at least one member of the royal family. Avery's men terrorized the passengers and crew of the ship and made off with some £600,000 in gold and jewels (an amount in the money of the time, worth roughly two billion dollars today).
The Indian government lodged the strongest possible protest, and England's government responded by offering a reward of roughly a million dollars for Avery's capture and making him ineligible for any future pardons offered to other pirates.
Despite this, rioting broke out near the East India Company’s holdings. Four of its factories were seized and destroyed. Company officers were jailed, and nearly lynched by angry Moguls.
At about this time, the company’s monopoly was rescinded by the English crown. New trading companies were started. But the East India was so firmly entrenched in the region’s trade that no real competition took place. Shares of the rival companies were bought up by the East India’s officers and main shareholders, and the major competitor, the English Company Trading to the East Indies, was absorbed by the older company.
The British government began to make efforts to re-assert control, but a series of skirmishes between Britain and France caused the government to renew and continue granting extensions to the monopoly. In return, the Company loaned the government £1,000,000. When war finally broke out, it was between the Company and France. And it took place on Indian soil. The Company was victorious again.
Over the next several years, the East India built a private army and navy, the strongest in the region, and conquered fresh territory, all in the name of the English Crown. Territories conquered by the Company became the direct property of the English ruler (as opposed to territories granted to Britain by treaty, which belonged to the nation).
The company began to trade in the materials used to make gunpowder, started and won a war with China, and took over almost the entire Indian sub-continent.
But you can't rule a country the way a business is run. Pandemics, uprisings and famines took their toll on the lands run by the Company, and in 1858 the British government formally took over rule of the vast lands the Company had conquered.
And what has all this to do with pirates in the Caribbean? Not very much. Though pirates like Avery did cruise the Indian Ocean looking for (and often winning) plunder and riches, the East India Company never directly controlled territory in the Caribbean.
So why does John Company appear in pirate tales as the Bad Guys? For one thing, the Company was outside of any government control for most of its history. Its Board of Directors, managers, and employees committed any number of atrocities against various non-European natives, while its official monopoly prevented many daring and talented traders from ever making the fortunes that they might otherwise have amassed.
And, quite frankly, the giant, impersonal business makes just the right foil for freedom-loving pirates. So, 300 years later, we can make-believe that the Honourable East India came to the Caribbean and caused trouble for our friends, the pirates.
It may not be true, but it makes a great story.
Rick Perry said he would “create another 250,000 jobs by getting the EPA out of the way” of natural gas drilling. But the EPA isn’t currently in the way: The very study on which Perry relies assumes that all of those jobs will result if current regulations are not changed.
In a speech at a steel plant in Pittsburgh on Oct. 14, the Texas governor outlined a sweeping plan to create over a million jobs by increasing American energy production. The plan involves opening up numerous areas currently off-limits to oil and gas exploration, and repealing regulations he said are hampering domestic production of fossil fuels.
The full potential for American energy production can only be realized, he said, “if environmental bureaucrats are told to stand down.”
Calling natural gas a “game-changer” in U.S. energy production, Perry cited regulation of hydraulic fracturing as an example of government overreach. Hydraulic fracturing, or “fracking,” is the process of extracting natural gas from underground shale formations. Spurred by technological advancements, the Department of Energy projects shale gas will comprise over 20 percent of the total U.S. gas supply by 2020.
With the Marcellus Shale deposits in the northeast U.S. poised to be the largest producing gas field in the U.S., they have come under intense national focus. Gas companies see huge potential for production and profits and environmentalists worry about damage to drinking water and other environmental impacts. Perry, who is running for the GOP presidential nomination, said development of the Marcellus Shale would be a presidential priority for him.
Perry, Oct. 14: “And right here in Pennsylvania, and across the state line in West Virginia and Ohio, we will tap the full potential of the Marcellus Shale and create another 250,000 jobs by getting the EPA out of the way.”
According to a footnote in the full energy plan published on Perry's campaign website, the 250,000 jobs projection comes from a study released in July, “The Pennsylvania Marcellus Natural Gas Industry: Status, Economic Impacts, and Future Potential.” The study was funded by the Marcellus Shale Coalition, a trade association that represents gas companies, and performed by three energy professors at the University of Wyoming and Penn State University.
According to the report, if natural gas prices don’t fall significantly, “Marcellus economic activity could support over 250,000 jobs” by 2020.
But current EPA regulations aren’t holding back that potential, as Perry contends.
“We made our projections under current policy in effect,” said one of the study authors, Timothy J. Considine, a former Penn State professor who is now director of the Center for Energy Economics and Public Policy at the University of Wyoming.
The EPA is studying the issue of hydraulic fracturing, but no new regulations have yet been proposed. Due to the expanded use of fracking, Congress in 2010 directed the EPA to study the topic, “to better understand any potential impacts of hydraulic fracturing on drinking water and groundwater.” The EPA’s initial research results aren’t expected until the end of 2012 and the final report is expected to be released in 2014.
“If there are real stringent regulations imposed, I think the governor has a point that it could significantly impact the industry,” Considine said. But it’s premature to speculate on what regulations might be proposed and how they might affect job projections, he said.
But based on some of the questions being raised during the EPA study process, there is at least reason for concern by the gas industry, said one of the study’s co-authors, Robert W. Watson, an emeritus professor at Penn State and chair of the technical advisory board to oil and gas management of the Pennsylvania Department of Environmental Protection. Of particular concern, Watson said, is the possibility of permit requirements regarding diesel equipment used to extract the natural gas. Those could potentially be costly and discourage production, he said.
But again, those are potential regulations that have not been proposed. We take no position on what the EPA should or shouldn’t do, or what the Obama administration will or won’t do on its own. It’s fair game for Perry to say he’d prevent future regulations from being imposed, but he misleads when he says he would clear away impediments that the industry’s own study says don’t currently exist.
– Robert Farley
Businesses can be placed into two kinds of markets: horizontal and vertical. Both are vital for marketing and company-building purposes. Here are some ways to differentiate horizontal markets from vertical markets and understand how you can use them.
Horizontal markets are:
- Defined by a demographic feature that can be common across different kinds of businesses
- Always broader than vertical markets
- Usually cooperative and seeking joint opportunities
- An opportunity to market to a general audience
Vertical markets are:
- A group of businesses that share the same industry
- Always specific and cannot cross industries
- Often competing against each other
- An opportunity to market to a specific audience
Although the two market types are contrasting, businesses can usually be categorized into both horizontal and vertical markets at the same time. For example, a shoe company could market horizontally to the area in which it is located. It could also market vertically to anyone considering a new pair of shoes. A children’s book publishing company can market horizontally to literate people or vertically to children and parents.
Knowing which horizontal and vertical markets your company wants to serve can be helpful to its marketing success. By defining your markets, you can better advertise and serve your markets’ needs, whether generally or specifically.
Free and open markets are the foundation of a vibrant economy. Aggressive competition among sellers in an open marketplace gives consumers — both individuals and businesses — the benefits of lower prices, higher quality products and services, more choices, and greater innovation. The FTC's competition mission is to enforce the rules of the competitive marketplace — the antitrust laws. These laws promote vigorous competition and protect consumers from anticompetitive mergers and business practices. The FTC's Bureau of Competition, working in tandem with the Bureau of Economics, enforces the antitrust laws for the benefit of consumers.
The Bureau of Competition has developed a variety of resources to help explain its work. For an overview of the types of matters investigated by the Bureau, read Competition Counts. This Guide to the Antitrust Laws contains a more in-depth discussion of competition issues for those with specific questions about the antitrust laws. From the menu on the left, you will find Fact Sheets on a variety of competition topics, with examples of cases and Frequently Asked Questions. Within each topic you will find links to more detailed guidance materials developed by the FTC and the U.S. Department of Justice.
For additional information about the work of the Bureau, or to report a suspected antitrust violation, contact us. To learn more about how the Bureau is organized and who to contact with a competition question, consult Inside BC. The Commission cannot represent individuals or businesses, and these resources are not intended to substitute for legal advice.
This report shows that it is technically feasible for Alberta to eliminate its very heavy coal reliance within 20 years, without simply switching this reliance to another fossil fuel. It acknowledges that natural gas will play an important role in the province’s electricity future, but as one of a wide range of sources. As a critical first step, the province must lower the barriers that limit renewable power.
Many would-be renewable energy developers will not overcome the fundamental project-financing barrier identified in this report unless and until policy helps to mitigate the price uncertainty in the province’s electricity markets.
This will mean introducing more opportunities for long-term power purchase agreements—legal instruments used in jurisdictions across Canada and around the world to define the terms for the long-term sale of electricity. Policy could help incent long-term agreements with creditworthy electricity users and/or retailers through the private sector. A series of recent reports, by KPMG and others, have identified such agreements as the “missing piece” needed to make renewable power work in Alberta and to begin diversifying the electricity system.
The government could turn to a wide variety of instruments to bring power purchase agreements to fruition. Of these, the Clean Electricity Standard—a more market-oriented variation of the successful renewable portfolio standards in the United States—has attracted considerable attention and support (see sidebar, “The Clean Electricity Standard”). But any number of policy options or take-offs on the above concept could address the existing barriers.
Ultimately, the authors of this report would support any policy options that seek to achieve the following goals for Alberta’s electricity system:
- Level the playing field for renewable energy sources by accounting for the presently hidden pollution and greenhouse gas costs of fossil fuel generation
- Address the major hurdle to financing for renewable energy projects by providing some degree of long-term price certainty for the electricity generated
- Prepare the groundwork and dismantle regulatory barriers for the widespread market penetration of new, clean generation technologies—such as distributed generation and storage technologies that integrate renewable energy into the grid
- Allow renewable energy sources—including distributed generation sources—to fully realize the value of the energy they produce
from The American Heritage® Dictionary of the English Language, 4th Edition
- n. Intellectual capacity.
- n. People of well-developed mental abilities: a country that doesn't value its brainpower.
from Wiktionary, Creative Commons Attribution/Share-Alike License
- n. Mental ability; intelligence.
- n. Intelligent people considered as a group.
from the GNU version of the Collaborative International Dictionary of English
- n. mental ability; intellectual acuity.
from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
- n. mental ability
On paper, Michelle Obama and Hillary Clinton look pretty similar in brainpower:
As we know, the qualities of a good leader are many and brainpower is but one.
It's also the added brainpower, which is why small businesses should be eager to collaborate.
It was tremendous to have that kind of brainpower in one room.
That kind of brainpower is a remarkable achievement, way beyond me, that’s for sure.
Bruce Nussbaum of Business Week writes, the surge in companies going to India, China, and Eastern Europe in search of very cheap brainpower may soon be coming to an end -- far sooner than anyone has anticipated.
The human engaging the cat in ways that prompt the animal to use stalking skills and brainpower is the way to go — allowing Fluffy to be the mighty hunter so emotional, mental and physical needs are met, Johnson-Bennett says.
It's astonishing to me that we can call our country a super power and yet we do not invest as a priority in the brainpower of our little ones who will lead this country one day for us all.
That party should change its name and start from scratch with some brainpower and ideas for today's world, not the one these losers dwell in!
I was a multitasking genius with plenty of surplus brainpower to spread around.
Caribbean islands situated on continental plate boundaries have shown promising geothermal potential, estimated in the region at 850 megawatts. The chart above compares current installed geothermal capacity to “announced developing capacity,” the estimated power plant capacity reported for a specific site by a private company, government agency or contractor associated with the site.
According to the Geothermal Energy Association's October report, “The Status of Geothermal Power in Emerging Economies,” there is enough geothermal potential in the region that these countries could meet their own needs and export the energy they have left over.
The Caribbean Development Bank (CDB), Japan International Cooperation Agency (JICA) and the Inter- American Development Bank (IDB) recently signed a cooperative agreement to encourage renewable energy in the Caribbean. They plan to diversify energy sources through renewable energy, with an emphasis on geothermal energy.
In recent news, Saint Lucia will receive $2.8 million from the World Bank to begin a Geothermal Resource Development Project. They plan to use the funding for exploration, development and the implementation of a geothermal program in the area. The country is also working to modernize its energy regulatory body to attract developers to develop the energy industry. According to a 2008 study by the University of the West Indies, St. Lucia’s estimated geothermal potential is 680 MW.
In addition, St. Vincent and the Grenadines announced plans in September 2014 for a potential small geothermal power plant. Surface exploration indicates high geothermal potential near Mount Soufriere. The companies Emera and Reykjavik Geothermal will provide financial support for exploration in the area, while the New Zealand government is helping with technical guidance. Recently, the government was granted a loan of $15 million to help fund the development of a 10-15 MW geothermal project, expected to come on stream by 2018. The concessional loans come from the International Renewable Energy Agency (IRENA) and the Abu Dhabi Fund for Development (ADFD).
Whether you're an employee or a manager, the importance of communicating well can't be emphasized enough. However, like most skills in life, good communication doesn't come naturally for most people, who must develop the skill through continual practice. Those who have earned a reputation as excellent communicators share several common traits. These qualities, in turn, focus on one major goal: to make sure that all participants in a conversation feel equally heard, respected and understood.
Active listening is essential for effective communication. Instead of interrupting the other person, frontline workers and managers who master this skill focus on what the other person is telling them. Good communicators also understand the value of confirming that they're intently listening, whether it's through nods, brief verbal cues or paraphrasing the other person's statements. These sort of responses keep conversations from turning into monologues, which is crucial to solving a customer's problems or closing a business deal.
Empathy for Others
At many workplaces, communication is frequently associated with a hard-driving language style. However, that approach often leaves workers talking at each other in stiff, artificial ways. Instead, good communicators seek opportunities for collaboration and meaningful dialogue. Such possibilities are more likely to happen when you try to understand the other person's concerns about an issue, rather than forcing your views on him. Effective communicators seek to understand how others feel about a situation.
Employees and managers often interact without saying a word. If you understand this principle, you already know that body language, eye contact and tone of voice send powerful cues in workplace relationships. For example, lack of eye contact or failure to sit upright at a business meeting indicates boredom or disinterest in a speaker's message. Skilled communicators strive to avoid giving off these signals, which show how they regard others. Appropriate eye contact communicates respect and interest.
Without open-mindedness, good communication is unlikely to occur on a regular basis. For example, it's easy to assume that a co-worker who dominates the office conversation flow is showing off. However, such assumptions are also a common source of workplace conflicts. Good communicators avoid being drawn into these situations by asking clarifying questions and finding common interests. Such actions halt the escalation of future conflicts.
Good communicators recognize the value of positive thinking when enthusiasm flags around the conference table. This is no easy feat, since the human brain is better attuned to negative emotions that require more thinking to process. Effective communicators try to offset negative feedback with a couple of positive comments. Positive corporate leaders focus on rallying others around common goals to teach resiliency in tough business climates.
Ongoing; Old City Hall
The Museum’s 1892 Old City Hall building features a variety of exhibits that tell the stories of the building’s architecture, the city’s early days, logging history, and waterfront industry.
Green Gold: Logging the Pacific Northwest
Relive the history of logging in our corner of the Pacific Northwest through photographs, artifacts, and stories documenting both the good and the bad of Bellingham's timber era during the mid- to late nineteenth century. Historic video footage takes you back to a time when only the sheer strength of the lumberjacks felled the enormous trees. Learn what it took to be a lumberjack, the long days and hard work. Find out what a “road monkey” and a “river rat” did for their jobs.
Get a sense of place, and where we are in this fourth corner of the country, through an audio-visual journey of Old City Hall and the early days of Bellingham. Located on the main level of Old City Hall, in the gallery that was once the first mayor of Bellingham’s office in the late 1890s, you’ll learn a variety of historical facts and trivia.
Maritime History Gallery
Walk into the second floor Allsop Gallery for a lesson on Bellingham’s maritime heritage. From early steam ships, to fisheries, to notable schooners plying the shores of Bellingham Bay, you’ll get a waterfront history overview through photographs, artifacts, interactives, and model ships while looking through the gallery windows to the Bay. See messages visitors have placed in our “Message in a Bottle.” We hope you’ll visit our Maritime Gallery and leave your own message of encouragement.
John M. Edson Hall of Birds
Partnering with the North Cascades Audubon Society, this exhibit features our founding collection of more than 500 mounted birds, with interpretation, videos, and hands-on activities highlighting Pacific Northwest flyway zones, migration patterns, habitats, nests, and more.
Acetic anhydride is produced from glacial acetic acid via ketene (CAS no. 463-51-4). Acetic acid is vaporized and fed together with a catalyst to a cracking furnace operating under vacuum, where ketene is produced together with water at high temperature. The reaction mixture is rapidly cooled to prevent the reaction from reversing. The condensed dilute acetic acid is separated, and the gases then pass through two absorption towers in which they are scrubbed by acetic acid/acetic anhydride of various concentrations.
Glacial acetic acid is added to the second absorption tower, where ketene reacts with the acetic acid to form acetic anhydride. In subsequent washing towers the gases are scrubbed further before being released to the atmosphere. The crude acetic anhydride is distilled in a distillation column and recovered as the bottom product. The top product is acetic acid, which is recycled to the process. Acetic anhydride is a raw material for cellulose acetate (fibres, films, plastics, cellulose lacquers), aspirin, agricultural chemicals, fragrances, pharmaceuticals and explosives.
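In outline, the chemistry described here is a two-step sequence. The temperature range and the phosphate-ester catalyst named below are typical literature values for the ketene process, not figures taken from this text:

CH3COOH → CH2=C=O + H2O (cracking to ketene under vacuum at roughly 700-750 °C, typically over a phosphate ester catalyst)

CH2=C=O + CH3COOH → (CH3CO)2O (ketene absorbed into acetic acid to give acetic anhydride)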
Fair Labor Standards Act
Most organizations carry out complex processes that require highly qualified employees, or they must train the employees they hire to fit into assigned roles. The Fair Labor Standards Act is very important because it ensures that employees are not overworked or taken advantage of in any way at the workplace. The United States has a Fair Labor Standards Act with guidelines on specific areas that must be adhered to at the workplace. Such areas include record keeping, minimum wage, child labor, overtime and hours worked (Whittaker, 2003).
The issue of minimum wage has always been contentious. When legislators were drafting the Act, they faced a dilemma over whether to establish a fixed amount or a rate that floats with the cost of living. The FLSA has since been adjusted so that the minimum wage can be revisited to reflect the cost of living. In July 2009, the US federal government set the federal minimum wage at $7.25 per hour. Some states set their own minimum wage rates.
In 2009, the overtime rate in the US was fixed at a minimum of one and a half times the employee's regular rate of pay. Normal working hours per day are expected to be eight, so overtime is calculated after forty hours have been accumulated in a week. An employer may choose to count overtime before forty hours have been accumulated, but never later than that threshold. The FLSA has no guidelines on special days like weekends and holidays; such guidelines are set by specific organizations. While some organizations work on holidays, others do not require their employees to work. Some unions have specific guidelines on overtime hours and the rates their members should be paid. The human resources department of an organization should review the rules of any union in which its employees are members. Failure to abide by employee rights can expose an organization to expensive lawsuits. Such cases would also put the organization in a bad light and increase the likelihood of high employee turnover.
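As a simple illustration of the federal baseline, here is a minimal Python sketch of a gross weekly paycheck with overtime. The wage and hours are made-up example values, and real payrolls must also follow state rules and any applicable union agreements noted above.

```python
FEDERAL_MIN_WAGE = 7.25  # USD/hour, the federal floor since July 2009
OT_THRESHOLD = 40        # hours per week before overtime applies
OT_MULTIPLIER = 1.5      # overtime premium on the regular rate

def gross_weekly_pay(hours: float, regular_rate: float) -> float:
    """Gross weekly pay under the FLSA baseline rules.

    Overtime is paid at 1.5x the employee's regular rate for hours
    beyond 40 in a workweek. Example values only; state law and
    union contracts may impose stricter terms.
    """
    if regular_rate < FEDERAL_MIN_WAGE:
        raise ValueError("regular rate is below the federal minimum wage")
    base_hours = min(hours, OT_THRESHOLD)
    overtime_hours = max(hours - OT_THRESHOLD, 0.0)
    return base_hours * regular_rate + overtime_hours * regular_rate * OT_MULTIPLIER

# 46 hours at $10/hour: 40 * 10 + 6 * 15 = $490
print(gross_weekly_pay(46, 10.0))
```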
Employers are required by FLSA guidelines to display the legal rights of employees where they can be easily seen. Such displays are meant to ensure that employers do not take advantage of their employees in terms of hours worked and compensation in terms of money. Employers are also expected to keep accurate records about their employees, such as the time when an employee was hired and signed contracts indicating benefits that employees are entitled to.
The Federal government dictates that employers should not employ minors. Minors are supposed to be in school finishing their studies, and they might not be in a position to negotiate for the opportunities that would favor them most. Some third-world countries experience cases of abuse of children's rights through exposing minors to poor working conditions and paying them low wages. Under Federal laws, employers found to be employing minors are liable for a jail term of not less than five years.
Distinguish between acceptable and unacceptable employment practices related to drug and genetic testing and other privacy concerns
Drug and genetic testing is conducted in some organizations in the country, in both government and private-sector institutions. Employers cite honesty, productivity and safety as the grounds for carrying out the tests on their employees. These practices are considered legal, especially in organizations where sobriety and health are important for carrying out work functions. In organizations where such tests are carried out, they are done to ensure honesty and promote health. Some employers put trackers on company cars, which are then monitored; such monitoring is especially common among sales representatives who cover specific regions. However, employers should ask for the consent of employees before conducting the checks (Hazards Magazine, 2014).
Most employees consider drug and genetic testing invasive and would rather not go through it. Some organizations conduct random drug tests, while others ask job applicants to take the tests. Most labor unions advocate substance-abuse support groups instead of drug testing, which can be stressful to the point of pushing individuals toward drugs and alcohol.
Genetic testing has been used in some companies to screen out employees deemed susceptible to illness. The technique has been criticized because it might exclude individuals from minority groups, such as African Americans, Asians and Hispanics, from employment opportunities. Furthermore, genetic testing results might have little, if any, bearing on the productivity of employees (Hazards Magazine, 2014).
The ethical issues that commonly arise in employment law matters
Laws that govern employees and organizations keep changing over time. It is therefore prudent for employers to keep reviewing the laws to ensure that they are compliant. In the recent past, many organizations have had to deal with employment law matters raised in lawsuits where they were the defendants and current or past employees were the plaintiffs. Many such lawsuits are pegged on gender discrimination at the workplace. The largest number of complainants have been women who were passed over for positions because of their gender. In some organizations, women are not offered certain positions because of the probability that they might get pregnant and therefore have to take maternity leave. Some employers also believe that women are inferior to men and therefore cannot perform as expected (Walsh, 2012).
Organizations should specify the policies they hold on gender-related issues and abide by them. If an organization states that it offers equal opportunities to both men and women, no record should arise of men being given preference, or vice versa. Organizations should also be clear on federal and state laws regarding age, disability and employment. While it might be legal to turn away a fifty-year-old individual in some states, it might be illegal in others. Employers should also not turn away an individual for being obese, especially if their papers show that the person is qualified. Such employers might expose their organization to a lawsuit, especially if the obesity is due to medical grounds that the individual has no control over.
Clear guidelines on sexual harassment should also be established and communicated to all members of the organization. There should likewise be clear guidelines on the action to be taken when sexual harassment occurs, with confidentiality maintained. Employers should be aware of the respective federal and state laws that govern background checks of potential employees. It is considered unlawful to disqualify an applicant merely because he/she has been arrested, and employers are also not expected to tie an individual's race to an arrest record as a reason not to hire him/her (Walsh, 2012).
Legal Compliance and Ethical Behavior in Employment
Monitoring employees is one of the ethical issues that arise in the proper treatment of employees by employers. Practices considered unethical include checking employees' private emails and phone conversations. It is also considered unethical when employers expose their employees to poor working conditions, such as poor lighting or insufficient tools, that might create health and safety problems. Respect in an organization should be mutual between employers and employees. Employers should set a good example in the manner in which they treat their employees, and management should strive to set high standards of respect so that respect develops into a culture, independent of the government and other legal regulations that apply to an organization. Ethical practices make employees feel appreciated, which increases productivity in an organization.
The management of an organization should ensure there is an ethical code that acts as a guideline on how employees are treated. Regular ethics training should also be conducted for employers and employees. Employees should not be victimized when they raise concerns over ethical behavior, especially if unacceptable practices are being perpetrated by senior managers in the organization (Collins, 2009). An employee handbook should set out the conduct and expectations that employers have of their employees. Most organizations' handbooks include information and guidelines on expected behavior in relation to drug and/or substance abuse, as well as guidelines on workplace relations, the circumstances regarded as sexual harassment and remedies for such cases.
Collins, D. (2009). Essentials of Business Ethics: Creating an Organization of High Integrity and Superior performance. Massachusetts: Wiley.
Hazards Magazine. (2014). Work privacy: Testing times. Retrieved April 14, 2014, from http://www.hazards.org/privacy/
Walsh, D. (2012). Employment Law for Human Resource Practice. Ohio: Cengage Learning.
Whittaker, W.G. (2003). The Fair Labor Standards Act. New York: Novinka. |
Owing to their ease of manufacture and relatively low weight, roof trusses are widely used in the construction of industrial facilities. They can be indispensable for covering large building bays without intermediate supports.
Currently, trusses with chords and diagonal elements made from twin angles, as well as trusses made from hollow sections (or, less frequently, round pipes) are the most common. For large spans of buildings, hot-rolled channels or I-beams can be used as chords.
In terms of their plane geometry, trusses are divided into parallel-chord, single-pitch, duo-pitch, and arch trusses.
In terms of their purpose in the structure of the building, trusses are divided into principal and secondary trusses. Secondary trusses are used to connect supporting columns and, in turn, act as a base for principal trusses.
Currently, truss elements are most commonly connected by semi-automatic or manual arc welding. |
With the advances and interest in hydraulic fracturing to harvest natural gas from the Marcellus Shale deposits in the states of Ohio, Pennsylvania, New York and West Virginia, many questions have earned media attention. There are certainly plenty of concerns and few definitive answers. The reality is that not a great deal of research has been completed on the repercussions of hydraulic fracturing, particularly in connection with the Marcellus Shale. Here are some common concerns:
– Does hydraulic fracturing contaminate the aquifers that supply drinking water?
– How much water is utilized in the hydraulic fracturing process?
– Does hydraulic fracturing cause drinking water to become flammable?
– Do earthquakes result from the hydraulic fracturing process?
Depending on what advocacy group funded any research effort, the answers to these questions will probably be very different.
Perhaps a quick primer on hydraulic fracturing and an understanding of the machinery and processes utilized will help shed some light on how forensic engineers can be of support. Hydraulic fracturing, or 'fracking' as it is commonly called, involves drilling several thousand feet into the ground and using a mixture of water, sand and other propping agents, or 'proppants', to fracture the surrounding rock formation. Hydraulic fracturing is typically used with horizontal drilling, in which the drilling direction is changed from vertical to horizontal, extending deep within the shale. The purpose of horizontal drilling is to gain access to as much of the shale formation as possible without having to drill dozens or hundreds of individual vertical wells, each of which would reach only the rock near its bore. Once the driller has reached the desired horizontal distance, the shale formation is fractured using a large and highly pressurized volume of the fracking fluid. The proppants in the fracking fluid are designed to hold the fractures in the shale open, releasing the natural gas that is trapped within the shale. This released gas can then be harvested and recovered for commercial use.
A good reference website to learn more about the process of hydraulic fracturing can be found at: http://www.hydraulicfracturing.com/Process/Pages/information.aspx
Below is some of the common terminology associated with the hydraulic fracturing process:
- Aquifer: an underground layer of water-bearing permeable rock or unconsolidated materials (gravel, sand, or silt) from which groundwater can be usefully extracted using a water well.
– Casing: a series of steel/cement seals installed along the path of the well to ensure no leakage of natural gas or proppant.
- High-pressure pumping: the machinery and process utilized to fracture the shale and release gas.
- Proppant: a material that keeps an induced hydraulic fracture open during or following a fracturing treatment. The fracking fluid itself varies in composition depending on the type of fracturing used, and can be gel, foam or slickwater-based.
When you look at the entire system of running a hydraulic fracturing well, you realize the process entails everything from the design of concrete foundations for drill and well pads to the trucks that transport supply and waste products to and from the site. Machinery and products such as pumps, wellheads, casing seals, and material handling equipment (MHE) all play their role throughout the hydraulic fracturing process and are a constant source of potential failures leading to insurance claims and litigation. In general, mechanical engineers investigate the machinery used to drill and extract gas from the Marcellus Shale deposits, while civil engineers design the foundation structures, the water retention ponds and the roads that carry the heavy water-laden tanker trucks. Foundations, water retention ponds and roadways all have specifications which play a critical role in their design. Usage, wear and tear, and misuse are the usual contributing factors to claims and litigation.
To find out more about how CED mechanical and civil engineers can provide facts and answers to allegations in hydraulic fracturing claims and litigation, call CED. Our engineers are dedicated to staying on top of industry developments. Please call us today at 800.748.4221 to discuss a possible case. |
Fluid Isolation in Pressure Transducers for Corrosion Protection
When the chemicals present in a fluid are very aggressive, and the pressure transducer is not available in materials sufficiently resistant to corrosion, it is often possible to mitigate these conditions with a fluid isolation system. This system can be created from easily available parts and help to achieve a useful service life where rapid damage of the pressure transducer would otherwise occur.
The concept is simple: create a liquid barrier to the corrosive fluid in the tubing leading into the transducer pressure ports. The barrier fluid can be captured in a U tube placed just before the transducer inlet so that it is trapped in place. The barrier fluid transmits the hydraulic pressure to the transducer, but because there is no flow, it acts as a buffer against the corrosive liquids in the rest of the system.
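Because the trapped barrier fluid transmits pressure hydrostatically, its column also adds a small static head (rho x g x h) to the reading, which is worth checking during setup. A minimal sketch of that offset calculation, with an illustrative density and column height rather than values from any particular installation:

```python
G = 9.81  # m/s^2, standard gravity

def barrier_offset_pa(density_kg_m3: float, column_height_m: float) -> float:
    """Static pressure offset added by the barrier-fluid column: the trapped
    liquid transmits system pressure plus its own head, rho * g * h."""
    return density_kg_m3 * G * column_height_m

# Example: a 0.3 m column of mineral oil (~870 kg/m^3) biases the reading by:
offset = barrier_offset_pa(870, 0.3)
print(f"{offset:.0f} Pa (~{offset / 1e5 * 1000:.1f} mbar)")  # ~2560 Pa, ~25.6 mbar
```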
There are a few considerations to keep in mind. The barrier fluid must be compatible with the fluid in the rest of the system, or it will represent an undesirable contaminant. The barrier fluid must also be stable and inert so that it does not pose a corrosive threat to the transducer itself. A thick, viscous mineral oil is often used in systems where brine is present; it tends to stay in place inside the U tube and is compatible with both the transducer and the other fluids in the system. Selecting the best barrier fluid is often a process of experimentation, and some knowledge of how it will affect the rest of the system is critical. |
A few big corporations in the world spend a great deal of time and money in the interest of society; however, it is not always just for good public relations (PR).
In many cases, these big companies are genuinely passionate about their causes, and many are the driving force behind changing hundreds of young lives for the better. Each year, big corporations initiate programs and spend millions of dollars helping to expand the minds of, and the opportunities available to, the next generation of potential tech whizzes, writers and scientists.
At this time of year, when charitable giving is on many people's minds, it can be useful to take a look at the ways some of the most prominent tech companies in the world are investing in education.
Below is a list of ways some tech companies are helping young minds learn, gain access to technology, and prepare for a better future:
IT giant HP conducts a variety of educational programs for young people. One of its biggest endeavors is the HP Catalyst initiative, which enables HP to expand its global education network and develop more effective approaches to science, technology, engineering and math (STEM) education.
The goal is to transform STEM learning and teaching, and to inspire students to use their technical and creative ingenuity to address urgent social challenges in their communities.
Further, HP offers $40,000 grants to teachers who lead the way in using technology in the classroom. The company also sponsors a worldwide summit on educational innovation.
Similarly, Microsoft is one of the most recognized corporations in charitable giving. When it comes to supporting education, it has developed a number of educational initiatives, competitions and programs to help young minds and teachers over the past few years.
Further, it offers educational institutions free or low-cost software and computers. The company also supports programs that run from elementary school to college and beyond.
Its famous DigiGirlz program gives high school girls a chance to learn about, use, and explore careers in technology. In addition, university and college students can take advantage of mentoring and internship programs, as well as low-cost training to improve career opportunities.
IBM is well known for providing educational programs and donations over a long period of time. One of IBM's best-known education initiatives has been the Learning Village software, which allows teachers to use technology in their lessons, tailoring and monitoring it much more effectively.
Further, the Reinventing Education campaign, launched in 1994, contributed nearly $100 million to educational causes and has been the company's biggest investment in education. Part of this program is the Academic Initiative, which helps train school teachers and professors in technological skills.
Google has largely focused on improving access to computer, engineering and science education. It has come up with a number of programs, including a worldwide science fair that encourages students to innovate and engineer things well beyond the norm for their grade levels, and programming competitions that offer high school and college students the chance to participate in real-world development.
Google also offers college and university scholarships, and some of its recognized programs include the BOLD internship program, CS4HS, LEAD, the Computer Science Summer Institute, the Google FUSE retreat, the Computing & Programming Experience program, and the Google Teacher Academy.
Verizon is a US-based cellphone giant famous for its donations to education in the USA. It won the Award for Philanthropy in Public Education from the NEA in 2009.
The award recognized the company's long-term commitment to education and literacy. At the core of the award was the company's Thinkfinity website, a project that has gained the company worldwide recognition. Thinkfinity is an amazing portal for educators, providing free ideas for lessons, professional development, helpful videos, educational tools, and even a social network for teachers.
In addition, the Verizon Foundation has donated millions of dollars to create community, national and online literacy programs that help young people and adults improve their reading, job and technology skills. |
The title of this article contains a bit of an untruth – every punctuation rule is an important one. Punctuation allows for the creation of clear, concise and compelling writing, which is the key to a successful piece of copy. Without punctuation, we couldn’t separate the independent from the subordinate, emphasise, exclaim, show possession or end a sentence.
The English language is a mélange of various influences, broadly Germanic and Latin but also from various regional dialects. As a result, English doesn’t contain the hard and fast grammatical and punctuation rules that the Romance, Germanic and Slavic languages have. Instead, we are left with a set of rules that almost always have an exception.
Despite this issue, let’s pin down the most critical punctuation rules (or at least an interpretation of them).
The Full Stop
The full stop is the copywriter’s best friend and it should be used liberally. It chops sentences up into snappy little sound bites that are easy to digest. Copywriters who neglect their full stops do so at their peril. The best way to judge your full stop use is to think of it as a breath: if you re-read your copy and get breathless before the end of a sentence, you need to throw a full stop in.
The full stop is also used for abbreviations such as Dr., Mr., Mrs., etc. It is also used for acronyms.
The Comma
The comma is a tricky punctuation point. Careful proofreaders will often spend hours removing and replacing commas. Commas are used for a multitude of reasons:
- Firstly, they’re used to separate independent clauses, e.g. ‘Her Mum got the job yesterday, so she took them out for dinner.’ In this case, the comma is almost like a half full stop because it is breaking up the sentence without finishing it completely and making it too choppy.
- You should also use a comma to separate an introductory word from the main clause, e.g. ‘Yes, you should rest.’ Yes, however, and well should always be followed by a comma, e.g. ‘Well, I told you to wear your jacket.’
- Commas are very useful when you want to separate an aside from the main sentence, e.g. ‘The Prime Minister, the leader of our country, is voted into office.’ You can test your use of commas for an aside by removing the aside: if the sentence still makes sense (the Prime Minister is voted into office), then the commas have been used correctly.
- Commas are also used in lists. There are two options for comma use when reeling off a list. The first option is the Oxford Comma, e.g. ‘I drink wine, beer, and spirits.’ The comma after ‘beer’ is the Oxford Comma. Perhaps the most common usage is to remove the comma before the ‘and’, e.g. ‘I drink beer, wine and spirits.’ As long as you are consistent throughout your copy, you can go with either.
- Commas should be used to separate place names (Melbourne, Victoria), dates (except the month and day, e.g. February 2, 2008) and addresses (except street number and name, e.g. 3 Grammar St., Punctville).
- They also need to be used to separate text and quotation, e.g. Mr. Glass said, ‘Do your work!’
Quotation Marks
Quotation marks are another area where there are no hard and fast rules. There is a general rule that double marks (“) should be used for a quote, and single (‘) should be used when you are quoting someone who is quoting another person. There are so many variations that it’s a case of using either single or double, as long as you are consistent throughout your copy. All punctuation should go inside the quotation marks, e.g. ‘The Professor said, “Shakespeare was an unabashed story thief.”’ Not, ‘The Professor said, “Shakespeare was an unabashed story thief”.’
The Colon
Colons should be used to link a complete statement to one or more related sentences or concepts, such as a quote, a list or a comment, e.g. ‘There are three key concepts in this syllabus: the other, hermeneutics, epistemology.’ Colons are also used in titles, such as ‘Brooklyn: A Love Story,’ or ‘Web Copy: The Do’s and Don’ts.’
The Semi-Colon
The semi-colon is as slippery as the comma. It is also very useful, especially in joining two equally important independent clauses to create compound sentences, e.g. ‘Peter studied for six months; he knew the exam back to front.’ In this case, both clauses are given equal weight due to the lengthier pause that the semi-colon allows.
Semi-colons are also used to separate items in a list where a comma is already required, e.g. ‘John Smith, CEO; Bull Snipe, CFO.’
The Apostrophe
There are three main uses of the apostrophe:
- Possessives of nouns, e.g. ‘Fred’s farm,’ which indicates ownership or possession of a noun.
- Contractions, ‘they’re’ or, ‘I’m,’ to indicate the omission of letters when two words are joined.
- To show plurals of letters, ‘ABC’s,’ ‘T’s and C’s.’
The most common mistake people make with apostrophes is confusing possessives with plurals, for example ‘two baby’s’ instead of ‘two babies,’ or ‘Freds ideas’ instead of ‘Fred’s ideas.’
The Dash and The Hyphen
Copywriters underuse the dash; it is a vital tool for adding emphasis to a sentence. It should be used sparingly to maintain its effect, but used correctly it can add real gravitas to a point, e.g. ‘You don’t need to know anything else except this – you need this product.’ Dashes can also be used to set off a point or thought, in a similar fashion to parentheses or commas: ‘I thought I knew her – I’d known her since I was six – I guess I didn’t know her at all.’ Again, the dash adds more gravitas than a comma or parenthesis would allow; it makes the aside a vital part of the sentence.
The hyphen is a little more boring than the dash, but it is a vital piece of punctuation. It can be used to create a single adjective out of two words, e.g. ‘snow-covered’ or ‘sun-drenched.’ It is also used with prefixes such as self (self-assured) and all (all-knowing), as well as between a prefix and a capitalised word (pre-Columbian) and between a prefix and figures or letters (the mid-1820s). |
PMBOK® Guide - Sixth Edition: 14-Project Quality Planning (RVLS-2870)
Project Quality Management is about managing quality for the project. This knowledge area incorporates many of the best practices and approaches of the larger quality management discipline, but only to the extent to which they support the project. Project managers are responsible for quality in terms of their project. A Guide to the Project Management Body of Knowledge (PMBOK® Guide) shows how to apply quality management best practices to the needs and expectations of your project. Project Quality Planning teaches you to learn and apply this knowledge and to keep it within the framework of a project and its management. All the approaches, best practices, tools and techniques, and processes revolve around meeting the quality needs of the project.
Health, Safety and Welfare
Upon completion of this course, the student will be able to: explain how to apply Project Quality Management knowledge; describe how Project Quality Management applies in the framework of a project and its management; and list the approaches, best practices, tools and techniques, and processes that revolve around meeting the quality needs of a project.
|
1. What factors contribute to the rapid pace of change in business? Is the pace likely to accelerate or decrease over the next decade? Why?
Human resources, capital, natural resources, entrepreneurship, and technology all contribute to the rapid pace of change in business. The pace is likely to accelerate over the next decade because all of these factors are developing rapidly.
2. What role does entrepreneurship play in the economy? Who stands to gain from the success of individual entrepreneurs? How do other parties benefit?
Entrepreneurship is key to the economy, and most economies support it. The entire economy stands to gain when an entrepreneur starts a business. Because businesses can decrease the workplace environment and not pay as many …
Some key strategies include attracting enormous foreign investors such as China and India. Customer satisfaction is very important here, so that businesses can attract more business both nationally and internationally. Businesses also took bold steps to lower tariffs.
8. How has the rise of the World Wide Web changed business practices? What are the benefits and drawbacks for business? For consumers?
The World Wide Web has changed business practices by providing better ways to customize advertisements and by allowing businesses to advertise to a broader community. The drawbacks are that a website may crash and must be constantly updated so that its information stays reliable.
9. How has the definition of diversity changed over time? Can a diverse workforce help a company compete more effectively? How?
Diversity has changed over time from issues relating only to gender to religious and cultural views. Growing ethnic populations offer more profit potential for the firms pursuing them.
10. How has the global free trade movement impacted business? Who benefits? Why?
A renegotiation of the GATT signed by 125 countries in 1995 took bold steps to lower tariffs. |
How can electricity markets function more cost-effectively?
The transmission and distribution activities are natural monopolies as the same infrastructure (electricity grid) is used to serve the competitive production and supply activities.
This way, the product (electricity) marketed by the production and supply sector is transported through the electricity network from the production point to the point of consumption. At the same time, electricity is a social commodity and it is unacceptable in modern societies for any citizen not to have access to this commodity.
Given a finite electrical system, for an electricity market to operate efficiently and economically, whether under a competitive regime or under a monopoly, three basic principles need to be met: (a) static efficiency, that is, available resources are utilised effectively for the operation of the market (e.g., more output with less expenditure); (b) public choice, meaning the alignment of participants' motivations (producers and suppliers) in the electricity market with the collective interest; and (c) dynamic efficiency, which is an increasing rate of innovation within the electricity market and improvement in both the service offered to the consumer and the reliability and quality of the product.
As far as the basic principle of static efficiency is concerned, an economically optimal allocation of resources is required so that electricity consumers pay for the cost they impose on the electricity system, based on the principle of cost orientation.
For the basic principle of public choice, given the need to produce a social commodity (electricity), the electricity market can be a monopoly (a public or private enterprise) or a competitive one (with a number of companies).
Regarding which of the two options is more likely to act in the common interest (i.e., to satisfy the basic principle of static efficiency), international examples suggest that a publicly owned model is more likely to drift into working autonomously, motivated not by the common interest but by the interests of its employees, its suppliers and, in some cases, even its competitors.
The reason is that public ownership lacks a key constraint: the incentive for the low-cost functioning of the electricity market. To address this, regulatory intervention is always needed so that the public undertaking (whether in a monopoly model or in a competitive electricity market) can meet the basic principle of static efficiency.
To meet the third principle, dynamic efficiency, we need to ensure that there is an appropriate environment for innovation that creates growth for the benefit of consumers. It is known that growth is not created under equilibrium conditions, even if they are effective in the short term; growth is caused by imbalances, that is, by innovations.
In addition, detecting, recording and influencing consumer behaviour are activities in which competitive markets are more successful.
In the case of electricity markets this is done by suppliers so that there is continuous improvement in the performance of their product (electricity) in the market.
These are the advantages offered by competitive markets, which meet the above principles and create rapid growth for the benefit of consumers by rewarding innovation.
That is to say, market forces, with the participation of alternative producers and suppliers of electricity and with appropriate regulation, can operate electricity markets in a cost-effective way for the benefit of consumers. They do so by introducing innovative technologies that can transform consumers from passive participants into active participants through the digitization of the electricity sector and the installation of smart technologies and applications. Andreas Poullikkas is Chairman of the Cyprus Energy Regulatory Authority |
Tower cranes are used for jobs in the shipping and construction industries. In construction, tower cranes lift steel, concrete and other materials used in the development of large building structures. In shipping, tower cranes lift large containers and place them onto ships to be transported. Tower crane operators are essential in these industries because of the large loads that have to be moved.
Tower crane operators must possess intrinsic qualities, as well as mechanical training that helps them perform their tasks both safely and efficiently. In order for an individual to work as a tower crane operator, he has to go through on-the-job training. The training usually lasts three to four years and also includes learning in classroom settings, construction sites and labs. Tower crane operators must also be very meticulous and pay sharp attention to detail, as a wrong move can result in massive damage and put lives at risk.
A high school diploma is also required to be a tower crane operator. Tower crane operators are required to follow established guidelines on load limits for tower cranes. The median salary for tower crane operators is $48,630 per year, and the profession also has great medical benefits. |
Importer/Exporter Job Description
An importer is a person who brings in goods from abroad, while an exporter is a person who sends goods abroad. Importers and exporters plan, organize and manage the sales and movement of goods from one country to another.
Importers and exporters should be ambitious, enterprising, resourceful and adaptable, and have the ability to identify good business prospects and conduct research to analyze markets and know whether a demand for their products exists. Before importing products from abroad, importers should investigate the reputation of the manufacturers and the reliability of their products.
What does an Importer/Exporter do?
Importers and Exporters may do some or all of the following:
- find and establish business contacts abroad
- conduct research to know whether a demand for their products exists abroad
- find products to buy abroad and negotiate prices with manufacturers/suppliers
- ensure that the importing/exporting products comply with relevant laws
- plan and prepare for the shipments of goods
- negotiate and arrange payments for ocean freight or air freight services
- prepare and compile import/export documents required for customs clearance
- keep up to date with exchange and financial market rates
Where does an Importer/Exporter work?
Importers/exporters may work normal business hours or irregular hours when communicating with clients abroad. They work in offices, but may spend time in warehouses, stores and factories. They may travel abroad to visit companies and source products, or to attend conferences or trade fairs.
What is Required to Become an Importer/Exporter?
There are no educational requirements to become an importer/exporter, but prior knowledge of logistics, commerce and business administration is helpful.
Knowledge, Skills and Attributes
Importers and Exporters need to have:
- integrity and reliability
- good attention to detail
- good listening and communication skills
- good judgment and decision making skills
- planning and organizational skills
- research skills
- record keeping skills
- the ability to work well under pressure
- the ability to respect and appreciate other cultures
- the ability to take initiative
- the ability to relate to a wide variety of people
- knowledge of the goods they are importing or exporting
- knowledge of import and export procedures and documentation
- knowledge of distribution systems |
"Schedule 40" and "Schedule 80" refer to reference charts published by the American National Standards Institute (ANSI). When speaking about PVC pipe, the different schedules refer to the size, thickness and maximum pressure of the pipe.
Schedule 40 PVC
Schedule 40 PVC is pipe made of polyvinyl chloride (PVC) that conforms to a set of standards produced by ANSI.
Schedule 80 PVC
Schedule 80 PVC is similar to Schedule 40 PVC, but it is thicker and can withstand a higher PSI.
While schedules 40 and 80 are very common, there are many other schedules for different applications. A Schedule 80 PVC pipe can withstand a higher pressure than a Schedule 40 PVC pipe but not as much pressure as a Schedule 120 PVC pipe.
Different Schedules Used
Smaller-schedule pipe tends to be less expensive than larger-schedule pipe. Depending on the volume and pressure of material that needs to be moved, the different schedules offer a choice in the cost of materials used. If a low-pressure drainage line is being installed, a Schedule 40 PVC pipe would be adequate and more cost-effective than a Schedule 120 PVC pipe.
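As a rough illustration of that selection logic, the sketch below picks the smallest (cheapest) schedule whose rating covers the design pressure. The ratings are assumed, illustrative figures for 1-inch pipe at room temperature, not ANSI chart values, and the safety factor is an arbitrary design choice; real selections should come from the published charts.

```python
# Illustrative pressure ratings (psi) for 1-inch PVC at ~73 F; actual values
# vary with diameter and temperature -- consult ANSI/manufacturer charts.
PRESSURE_RATING_PSI = {40: 450, 80: 630, 120: 720}  # assumed figures

def pick_schedule(required_psi: float, safety_factor: float = 2.0) -> int:
    """Return the smallest schedule whose rating covers the required
    pressure multiplied by a safety factor."""
    for schedule in sorted(PRESSURE_RATING_PSI):
        if PRESSURE_RATING_PSI[schedule] >= required_psi * safety_factor:
            return schedule
    raise ValueError("No listed schedule is rated for this pressure")

print(pick_schedule(100))  # a low-pressure line -> Schedule 40
```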
Use caution when working with highly pressurized PVC pipes of any schedule. Pressurized PVC can splinter and explode if pressurized beyond the maximum pressure for its schedule and diameter. |
To kick off our Transition Metals series, we’re taking a closer look at the elements that form common alloys.
Those transition metals are Cobalt, Nickel, Iron, Rhodium, Gold, Silver and Copper.
Cobalt is used to make high performance alloys and rechargeable batteries.
Cobalt alloys include Alacrite, Cobalt-chrome, Havar, Megallium, Permendur, Samarium–cobalt magnet, Stellite and Vitallium. These alloys are used in turbine blades for gas turbines and aircraft engines because of their temperature stability.
Their corrosion and wear resistance make them great for orthopedic implants as well. Cobalt alloys can be combined with high-speed steel to make the compound more wear-resistant.
Cobalt is widely used in lithium ion batteries for the new generations of electric cars.
Nickel combines with aluminum and titanium to form dozens of alloys. They are corrosion and heat resistant for use in turbines, power plants, medical applications, nuclear power systems and chemical and petrochemical industries.
Steel and ferrous (iron) alloys are the most common industrial alloys thanks to the abundance of iron-rich rock and their range of properties.
They’re low cost and very strong. Ferrous alloys are used in engineering to construct machinery, automobiles, ships and buildings.
Rhodium is a noble metal in the platinum group. It is usually alloyed with platinum and palladium because of its rarity. The resulting alloys are resistant to corrosion and aggressive chemicals, so they’re used as catalysts in your car’s three-way catalytic converters.
A rhodium alloy can also be used in nuclear reactors to measure neutron flux.
Gold alloys are created to make different colors of gold for jewelry. In addition to jewelry, black-colored gold can be used for electroplating, patination, and in plasma-assisted chemical vapor deposition processes.
Silver alloys with gold to make compounds that are stronger and more elastic than either metal individually.
Copper alloys – most famously bronze (copper and tin) and brass (copper and zinc) – are very resistant to corrosion. They are used in roofing materials, bullet jackets, electrical tools, hardware, pipes and plumbing.
Learn more about the transition metals on labnotes.chemistrymatters.com. |
Projects require improvisation
The concept of modern project management was born in the 1950s in the USA, mainly in the aerospace and defence sector. It was based on Operations Research, an analytical approach to solving technical problems and decision-making. Network planning techniques were the focus of the first decades of our discipline, with methodologies such as the Program Evaluation and Review Technique (PERT) and the Critical Path Method (CPM). The paradigms of this time: everything can be planned for, and someone in the organisation (a centralised function) can do this for the people implementing the solution. These paradigms still prevail. Project managers or a planning department (project or project management office) try to plan projects based on assumptions which fail to come true in real life. Project teams try to follow the plans while implementing the project, struggle with the dynamic changes in their context and often blame this context (or the circumstances) for not achieving what the project is aiming at.
As a matter of fact, we are confronted with a dynamic context, disruptive changes will happen more often and plans are sometimes not worth the time we need for developing them. This is especially true for special types of projects, such as organisational change projects, software development projects and innovations. The role of a plan is rather indicative, providing high-level guidance, to be filled by the project manager and teams with information based on the specific situation they are working in. Thus, improvisation plays a decisive role for project management in dynamic contexts.
What is improvisation? Wikipedia defines it as “the process of devising a solution to a requirement by making-do, despite absence of resources that might be expected to produce a solution”. Most references to improvisation point to the arts, e.g. improvisation in theatre or in music and dancing. Improvisation tries to go beyond all rules, regulations or “normal” patterns; it varies them (e.g. tone, pace and rhythm in music) using a maximum of creativity. Improvisation builds on all human senses, emotions and resources available at a given time. It is embedded in the situation and the context. Thus, a person needs to sense what the situation and the context are like, what he or she can make out of them, which options are available, and then choose the appropriate one. The choice is not based on a plan with its assumptions or on another person telling them what to do. It is rather a choice that builds on experience (past and present) and a sense of which options are helpful for moving forward. Improvisation also requires a person to be self-confident, self-organising and self-reflective. It means trusting your own skills and abilities, using them in the given context to (re)act.
Often I hear that improvisation is the opposite of project management: the latter is something planned, rational, structured and organised, whereas improvisation is chaotic, unpredictable, disorganised and non-scientific. Yet it is something we need to re-discover in order to cope with the challenges of today's projects. It requires us to use all our senses to identify the “weak signals” and make use of them in projects. For example, the atmosphere in a project team may cause you to restructure it to avoid future conflicts. Or a systems test may cause you to look deeper into the design of the system to prevent it from failing in the real implementation (see the failure of the baggage handling system). We still need plans, but higher-level plans that are tailored or detailed by the people dealing with the context to support them in moving forward, not plans that overload them with a theoretical construct that fails to work in the real situation. Trainings for project managers should use cases, simulations or real projects (e.g. humanitarian projects) to sharpen their senses and build the skills needed to deal with the complexity of today's projects. More essentially, organisations need to unleash the potential of the people working in the context of a project. Those people understand the challenges and potential solutions much better than a centralised (planning) department or a person at the top of a hierarchical ladder. So let's reposition “improvisation” as something positive for a project manager to do; it should definitely be in the repertoire of a virtuoso project manager! |
Do you know what accountants really do?
Maybe you have a vague idea. They handle taxes, track where an organization’s money goes, work in business… right? That’s all true. But there’s more to it than that – a lot more.
Accountants have founded some of America’s biggest companies. Many own small businesses in your hometown. They also run nonprofits. Some even teach.
An accounting degree opens doors to great careers — no matter your ambitions.
Common early career tracks
After earning a Master of Accounting (MAC) degree, most graduates go to work in public accounting firms, often focusing on one of three specialties: auditing, taxes or advisory work.
Corporate accounting is also a common career path, often after a stint in public accounting and, sometimes, right after graduating.
But what do accountants actually do in these jobs?
Auditing – more than numbers
Auditors are hired by companies to ensure their financial statements are accurate. They must understand the ins and outs of financial accounting and also know how to spot errors and deception.
New accountants often start out as staff auditors and might move up to senior auditor or audit manager positions as they gain more experience.
Auditors need more than just accounting skills. Good auditors are also diplomats, skilled at working with people whose routines are interrupted during an audit.
Sophia Woo (BSBA ’10, MAC ’11) started her career as an auditor at PwC.
“You’re there at the client site doing an audit. They’re not happy you’re there, but they’re paying you,” she says. “Being able to balance all those relationships and keep people happy was one of the biggest things I was able to learn.”
For Woo, those skills were helpful when she transitioned into a less structured role as an entrepreneur, where she has to deal with customers, vendors and others on a daily basis.
Tax roles open up new options
Tax accountants focus on individual and business taxes, including state and federal income taxes, sales taxes and many other government levies. They also help large companies navigate the tax complexities of operating in multiple countries.
Entry-level accountants might start out as tax staff, but with experience they move on to become tax managers. In corporate settings, tax accountants can take on senior roles such as vice president of tax or manager of global tax.
Tax know-how is critical, but successful accountants are also great at working in teams and managing others. Mastering those skills can lead to big opportunities.
Nathan Andrews (BSBA ’93, MAC ’93) worked in tax at Deloitte, one of the Big Four public accounting firms. Several years ago he started a new business within the company – a tax management consulting unit that helps companies manage tax requirements in different jurisdictions across the U.S. and around the world.
“I was basically given the opportunity to create a business [which has grown] from two people to over 300,” he says.
Accountants who go into advisory work function as consultants, helping companies solve a variety of business challenges — many are not pure accounting problems but can be better understood or solved when accountants put their analytical skills and business know-how to work.
“Much of what we do in advisory is not necessarily accounting related,” says Scott Rosenbaum (MAC ’13), who worked as an intelligence analyst focused on terrorism for several years before returning to school for his MAC degree. “UNC students are uniquely positioned to capitalize on their non-accounting backgrounds. We bring something new to the table because our education and previous work history is not accounting focused.”
Corporate accountants work for – you guessed it – corporations as opposed to public accounting firms.
Accounting is critical to any business – especially for large companies – meaning there are lots of career opportunities for accountants in an organization, from entry-level financial analysts and cost accountants to senior roles such as corporate treasurer, controller, chief accounting officer and chief financial officer.
Anne Lloyd (BSBA ’83), executive vice president and chief financial officer at Martin Marietta Materials – a multibillion-dollar publicly traded company – worked at a public accounting firm early in her career before switching to corporate employment.
“UNC provided the foundation and gave me the confidence to begin to build the house on top of that foundation,” she says. “As you add more and more floors to the house over time, you realize and appreciate more and more how vital a strong foundation is to the structural integrity.”
Lots of options
Though public accounting is a common early career choice for accountants, their options are virtually unlimited.
Accountants are represented in nearly every field. The No. 3 official at the FBI is a CPA who started his career in public accounting and then became an FBI agent. There are even several CPAs serving in Congress. UNC MAC graduates have c-suite jobs at many top companies, including entertainment businesses in Los Angeles and with the biggest names in technology.
The bottom line: no matter what your ultimate career aspirations are, the business know-how and skills you gain from a Master of Accounting degree will get you there faster.
It’s time for some unique roles
To this point, we’ve covered the basics in tax, audit, advisory and corporate roles. There are, however, numerous unexpected roles where accounting fundamentals are put to work in truly unique ways. Download our “Creative” whitepaper to explore some of them.
Considering accounting as a career?
Your first step is a Master of Accounting (MAC) degree.
UNC Kenan-Flagler offers a one-year, Top 10 MAC in two formats:
1. On-campus for non-accounting majors
2. Online for both accounting and non-accounting majors |
WASHINGTON, July 23, 2018 – (RealEstateRama) — A new publication is now available to offer buildings professionals a practical look at the future of the energy sector and the role of buildings.
The resource, Building Our New Energy Future, is a primer tailored to prepare buildings professionals for the challenges and opportunities of designing efficient and grid-responsive buildings within the changing energy sector. The primer was developed for ASHRAE, in collaboration with the American Institute of Architects (AIA), the National Institute of Building Sciences (NIBS) and the U.S. Department of Energy (DOE) National Renewable Energy Laboratory (NREL).
In 2015, NIBS collaborated with DOE to develop a common definition for what it means to be a zero energy building. That effort, which brought together a diverse set of building stakeholders, contributed to this publication.
“This primer was developed by a team of buildings experts across industries to provide a common language understanding of key topics that will affect our energy future,” said ASHRAE President Sheila J. Hayter, PE, Fellow ASHRAE. “Our new energy future has many exciting opportunities and challenges and this document provides guidance to help buildings professionals not only become more aware of the steps needed to move our energy future forward, but also shows them how they can become involved to ensure our new energy future serves all humanity and promotes a sustainable world.”
The primer explores resources on topics such as distributed energy resources (DERs), electric vehicles and buildings, the Internet of Things (IoT), smart grids and buildings, the future of utilities and high-performance building design.
Strategies to communicate about prioritizing loads, storing versus using energy, and advances in renewable energy are provided throughout the primer. It also shares how new practice areas and business opportunities for building professionals will emerge in this drive toward a more sustainable built environment.
“It is vital that decision makers understand how the nation’s electrical grid works when they are taking steps to reduce energy usage—whether it’s at the building or community level—or they could unknowingly cause the reverse result,” said NIBS President Henry L. Green, Hon. AIA. “Building Our New Energy Future clearly explains this complicated topic so people recognize the impacts their energy choices have on the power grid, occupant comfort level and their bottom line.”
Building Our New Energy Future offers a variety of resources and practical advice to help buildings professionals dissect the intricacies of the evolving energy future, including:
- Facts behind the changes in grid infrastructure, utility business models and building load management
- Practical advice for owners, designers and construction professionals on smart-grid integration
- A look at how to control loads and manage them in a way that will impact the electrical system infrastructure’s performance (a minimal load-prioritization sketch follows this list)
- Information on how the technology sector is already engaged in building automation and controls, and renewable generation and energy storage
- Project examples of renewable electricity generation with customer-sited energy storage, rooftop solar PV, Solar Integration Systems (SIS) and off-grid solar-plus-storage microgrid systems
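To make the load-control point from the list above concrete, here is a minimal sketch of a priority-based load-shedding rule. Every load name, priority and kW figure is invented for illustration; a real building automation system would drive this from measured demand and a utility signal.

```python
# Hypothetical loads: (name, priority, demand in kW); 1 = most critical.
LOADS = [
    ("life-safety systems", 1, 5.0),
    ("HVAC", 2, 40.0),
    ("lighting", 2, 15.0),
    ("EV charging", 3, 30.0),
    ("water heating", 3, 10.0),
]

def shed_to_limit(loads, demand_limit_kw):
    """Keep loads in priority order until the demand limit is reached;
    everything else is shed (or deferred to storage, if available)."""
    kept, total = [], 0.0
    for name, priority, kw in sorted(loads, key=lambda load: load[1]):
        if total + kw <= demand_limit_kw:
            kept.append(name)
            total += kw
    return kept, total

kept, total = shed_to_limit(LOADS, demand_limit_kw=65)
print(kept, total)  # life-safety, HVAC, lighting -> 60 kW under a 65 kW cap
```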
“With the publication of Building Our New Energy Future, ASHRAE has articulated a vision for the future of the electrical grid, providing architects with essential information that defines the symbiotic relationship between high-performance building design, on-site renewable energy and energy management systems and off-site energy solutions,” said Carl Elefante, AIA President.
The WBDG Whole Building Design Guide® hosted by NIBS offers expanded resource pages on many of these subjects. |
Wondering What the U.S. Air Force’s Secretive Spaceplane Can Do? History Offers Clues
Dyna Soar concept preceded X-37B by 50 years
On Oct. 17, one of two Boeing X-37B robotic spaceplanes in existence landed at Vandenberg Air Force Base in California after spending a record 675 days in orbit.
The U.S. Air Force has remained tight-lipped about just what the small unmanned spaceplane was doing up there. In all likelihood, the payload and mission were both experimental, but what those experiments might have been remains a topic of speculation.
But strictly speaking, the X-37B is not the first design of its type. Boeing’s X-20 Dyna Soar proposal, now more than 50 years old, offers some clues as to what today’s X-37B could do.
The X-20 was the first serious attempt to build a spaceplane — and its roots stretch back all the way to the 1940s.
In 1948, the Bell Aircraft company began development of the manned intercontinental Bomber Missile, or BoMi, as a way to deliver a nuclear weapon into the heart of the Soviet Union with speed and some measure of precision.
It was a multistage reusable rocket vehicle, manned for the simple reason that unmanned systems were not considered capable of the accuracy needed. The development of the Atlas ICBM put an end to that belief … but not an end to the Air Force’s interest in a manned spaceplane.
In November 1959, Boeing began the laborious task of turning a paper design for a manned spaceplane, the Dyna Soar, into something that could fly. After two years of development and constant revision, in December of 1961 the Dyna Soar reached its ultimate form with the Model 844–2050E.
The Model 844–2050E was a flat-bottomed delta-winged configuration with wingtip fins and a distinct fuselage. Relatively small with a span of only 250 inches, it had a length of 424 inches. At liftoff the Dyna Soar weighed 11,390 pounds.
The Dyna Soar glider proper was not equipped with propulsion systems beyond reaction control jets for attitude control in space. The Dyna Soar, like the X-37B, would land unpowered.
The baseline launch vehicle for the Dyna Soar was the Titan IIIC. This booster could loft the spaceplane onto a once-around sub-orbital flight, from Florida’s Cape Canaveral to Edwards Air Force Base in California.
The Titan IIIC could also send the Dyna Soar into a certifiable orbit for a three-orbit mission—orbital altitude of 600,000 feet—and, it was planned, onto much higher, much longer orbits.
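As a rough check on what a three-orbit mission meant in time, the sketch below computes the circular orbital period at the quoted 600,000-foot altitude from Kepler's third law, using standard values for Earth's radius and gravitational parameter:

```python
import math

MU_EARTH = 3.986e14      # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371_000.0    # m, mean Earth radius
FT_TO_M = 0.3048

altitude_m = 600_000 * FT_TO_M             # ~183 km
a = R_EARTH + altitude_m                   # semi-major axis, circular orbit
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)  # Kepler's third law

print(f"{period_s / 60:.1f} minutes per orbit")  # ~88 min; 3 orbits ~ 4.4 h
```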
To provide on-orbit maneuver capability, the Dyna Soar would be connected via a “transition section” to a Martin Co. Transtage. This reliable upper stage survived the Dyna Soar program and became a standard feature on the Titan IIIC launch vehicle.
The purpose of using a lifting vehicle was twofold. The first was to gain cross-range capability: the lift capabilities of the Dyna Soar meant that it had a cross range of nearly 2,000 miles and a down-range capability of nearly 4,000 miles, giving it wide latitude in where and when to begin the de-orbit burn.
The second reason for lifting re-entry was that the deceleration could be greatly reduced and stretched over a longer period of time. This, it was thought, would make space travel easier on both the vehicles and the crew, allowing space to become a more “operational” environment.
Even though the Dyna Soar was granted the X-20 designation, it was never intended to be an entirely experimental craft; instead, the X-20 (suborbital flights) and X-20A (orbital flights) were meant to build a database that would allow the construction of operational Dyna Soar aerospacecraft.
Boeing designers presented the “Standard Glider” in mid-1963. Externally identical to the X-20, the interior was changed to permit payloads and expected improvements.
The monopropellant RCS thrusters were replaced with bipropellant jets burning N2O4 and Aerozine 50, the pilot’s instrument panel was re-arranged to let him interact with the payload, and the equipment in the secondary power bay was re-arranged for compactness, with much of the power generation capability, such as the LH2 tank, transferred into the transition section.
Most importantly, the equipment bay was modified into a true cargo bay capable of seating up to four passengers. The topside access panel could be replaced with power actuated doors, much like on the later—and now retired—Space Shuttle.
Boeing documentation illustrated a number of payloads for the operational Dyna Soar.
The designers proposed two types of Dyna Soar bombers—the pre-emptive and second strike vehicles. The pre-emptive strike vehicle would carry a crew of three, with two hydrogen bombs strapped alongside the Transtage under aerodynamic fairings.
A large number of these bombers—30 or more—would wait in hardened silos near Vandenberg, ready for instant launch. They would launch on a southerly trajectory that would orbit them over targets in Russia or China.
More or less evenly spaced, they would provide minimal response time from weapons commit to impact—just two minutes.
The bomber would have a crew of three—a pilot-commander and two weapons monitors. Their mission would last around 24 hours, assuming it wasn’t interrupted by an actual nuclear apocalypse.
Additional equipment in the form of electro-optical sensors and a radar altimeter would deploy from the aft “ramp” above the secondary power bay. A secure two-way communications system would also be vitally important.
The “second strike” capability was quite different. Eleven Dyna Soars and their launch vehicles would form a single unit. Ten would be unmanned bombers with the 11th being a manned control vehicle.
Having launched into a storage orbit 100 nautical miles high, no further communications from the ground would be necessary. With a storage-orbit inclination of 28.5 degrees, the weapons would not overfly the Soviet Union. But if national authorities called them down, the vehicles’ cross-range capability was enough to allow them to reach targets as far north as 75 degrees.
The bombers would stay in orbit for 12 weeks. If Armageddon didn’t come, controllers could recall them to land at an Air Force base for recovery, refurbishment and relaunch.
Left unreported is the expected on-orbit mission time for the crewed command vehicle. Presumably, as with other Dyna Soar vehicles, it would have had a mission time of around 24 hours. Most likely the orbiting bombers would be left unattended, with command vehicles joining them only in times of crisis.
Each unmanned Dyna Soar bomber had a single 20-megaton hydrogen bomb, a 5,000-pound thrust turbojet and its fuel. On command, the bomber would re-enter, drop to subsonic speed and start the turbojet. After that it would fly the last 250 nautical miles at low altitude using terrain-mapping radar for guidance to within 400 feet of the target.
They were essentially cruise missiles that dropped down from space.
A version meant for high-altitude detonation replaced the jet engine with a 40-megaton bomb.
Slightly friendlier were several designs for satellite inspector and interceptor craft equipped with sensors and, for certain missions, “negation systems.”
The primary mission was to put the satellite inspector in the same orbit as the target, rendezvous with the target, scan the target with a multitude of sensors, record the data on tape and return the pilot, glider, sensors and data tape to the ground.
For some missions, the pilot would be required to make a judgment regarding the intent of the target satellite, and, if in his judgment it poses a threat, destroy that satellite.
Several slightly different payloads were proposed for satellite inspection. One concept from late 1963 called for a two-man crew, with the second crewman sitting in the unpressurized cargo bay and having the ability to swap places with the pilot. The inspection gear was located mostly in the aft boat tail, and would extend once in orbit and retract prior to re-entry.
This concept had a considerable store of expendables located within the transition section, and provided for a 14-day inspection mission. Little information is available on this concept, but it appears that the two crewmen would have had to stay within their suits for the entire length of the mission, as the vehicle would be unpressurized and the crew, especially the backseater, exposed directly to space.
Another series of concepts from mid-1963, and apparently the designs the Air Force selected for operational duty, called for a single-man satellite inspector.
The first design called for simple inspection. The inspection sensors were located on a turret that would extend from the cargo bay; this was separate from the cockpit, allowing a pressurized atmosphere. In any event, the mission duration was reduced to a more comfortable 16 hours.
Sensors included a targeting radar, cameras, electronic signals-interception gear and an infrared tracker.
The next design was very similar to the first, with the exception that a 48-inch dish replaced the smaller terminal guidance dish of the earlier model. Due to the larger dish there was a repositioning of the turret sensors and antennae, but everything else was much the same. Mission time was again 16 hours.
An interesting note is that this radar was meant to allow tracking of target satellites with radar cross-sections as low as 0.1 square meters. Stealth satellites were a concern even then.
The third design was a far more aggressive vehicle. The same sensor suite that the previous design had was provided here, but with the addition of “negation provisions.”
The Dyna Soar inspector would be backed off from the target satellite if the pilot judged it to be a threat. With the addition of a nuclear radiation detector and a mass measurement system, the pilot could determine whether the satellite in question was equipped with a nuclear power source or warhead. Mission time was up to 24 hours.
If the crew judged the target satellite to be an unhardened, non-nuclear target, the negation system to be used sounds quaint—a rifle. An AR-15, .223-caliber automatic rifle, to be exact.
The pilot wasn’t to simply roll the window down and take potshots at the satellite. Instead, the AR-15 was mounted to the sensor turret. The crew could fire it at the satellite from a range of 100 to 200 feet to damage solar panels, rocket motors and other delicate structures.
For hardened or nuclear targets, the craft could launch with three to six spin-stabilized, infrared-guided rockets weighing 370 pounds each. The rockets would have to hit a four-square-meter target from a standoff distance of up to five nautical miles, which the designers judged sufficient to protect the pilot from a nuclear detonation.
If the crew fired all the missiles and there was still uncertainty about whether the target was sufficiently damaged, the inspector would move back in to within a few hundred feet, inspect the target and if need be open fire with the rifle.
Almost as old a requirement as the manned rocket bomber was an equivalent reconnaissance platform. It would have the best of both worlds when compared to aircraft and satellite recon systems. It would have the immediacy of aircraft systems, while being as invulnerable and far-ranging as a satellite.
The basic reconnaissance variant of the Dyna Soar was a one-man multi-sensor platform. So many sensors were included that engineers had to cram many of them in the expendable transition section.
A large optical telescope/camera fit in the cargo bay. The basic sensor package included high-resolution cameras and radar. The biggest camera had a 105-inch focal length.
The camera system came with 1,000 feet of film, weighed 1,000 pounds and boasted a theoretical resolution of one foot, which would be quite competitive with modern satellite reconnaissance systems.
From Cape Canaveral, the Titan IIIC would have been able to launch the recon Dyna Soar into a 58-degree inclination, 70-nautical-mile-high circular orbit. The Transtage would be able to provide another 1,400 feet per second while on orbit, which could increase the inclination to 61.2 degrees.
Mission duration would be 24 hours, after which time the Dyna Soar would return with its payload of film and taped data. From Vandenberg, the Titan IIIC would launch the Dyna Soar into orbits with inclinations between 34.6 and 90 degrees.
After years of struggle and achievement, Secretary of Defense Robert McNamara cancelled Dyna Soar on Dec. 10, 1963. There is no denying that the Dyna Soar program was an expensive one, nor any arguing that there were missions that Dyna Soar, and only Dyna Soar, could fulfill.
For every task anyone proposed for Dyna Soar, there was another, cheaper way of doing it. The alternative systems could be made smaller, lighter, cheaper and at lower technical risk than Dyna Soar, and launched on smaller boosters.
While a vast amount of work was carried out during the existence of the Dyna Soar program—11 million man-hours of engineering, 14,000 hours of wind-tunnel testing, 9,000 hours in simulators—a definite mission for which the Dyna Soar proved demonstrably superior failed to materialize.
As of the stop-work order, construction was well underway on the first Dyna Soar. At that time, the Air Force had released 49 percent of the required production orders—14,660 of them. The first spaceframe was 40-percent complete.
Fifty years after Dyna Soar ended, the X-37B was orbiting Earth on its third mystery mission.
The X-37B is itself the end result of a development program longer than that of the Dyna Soar. The earliest recognizable antecedent of the X-37B was the Rockwell International “REFLY,” a small REusable FLYback satellite.
REFLY would go into orbit atop a Pegasus booster and would, as with the X-37B, provide power, maneuver and return capability for a payload meant to spend a short time in space. Rockwell submitted a patent application for REFLY in 1993, making the concept at least 21 years old at this point.
After Boeing acquired Rockwell, work on the REFLY continued, leading through the Military Space Plane and Space Maneuver Vehicle programs to the X-40 and finally the X-37B Orbital Test Vehicles. The configuration changed surprisingly little.
NASA chose Boeing to develop a reusable spaceplane in 1999. However, in 2004 NASA transferred the program to the Defense Advanced Research Projects Agency, which not only continued development but also clapped the cloak of classification over it.
While the configuration of the X-37B bears virtually no resemblance to that of the X-20, the cargo bay on the X-37B is larger than that of the X-20. It’s roughly the same cross section, and perhaps half again as long.
Fifty years of advancement in automation means that many missions like those planned for the Dyna Soar would now not need a crewman on-hand. The X-37B also benefits from advances in materials, both structural and thermal.
Where the Dyna Soar was built out of very dense nickel superalloys, the X-37B has a graphite/polymer composite main structure. The Dyna Soar had heat shielding that permitted massive heat loads to penetrate into the vehicle; the X-37B has a Shuttle-like cladding of silica ceramic tiles, effectively keeping the heat out.
The X-37B has a much lighter internal structure, with less need for the cooling systems needed around the Dyna Soar’s cockpit, cargo and equipment bays.
One important difference between the Dyna Soar and the X-37B is the inclusion of primary propulsion in the more recent vehicle. The Dyna Soar glider relied upon the Transtage for orbital maneuvers. The X-37B uses storable bipropellants for both reaction control thrusters and a single main engine in the tail.
The X-37B also has reaction wheels which use rotational momentum to provide attitude control without the expenditure of propellant.
The Dyna Soar used hydrogen/oxygen fuel cells to provide on-board electrical power. While this has proven a successful system on Apollo and the Shuttles, it does limit total energy. Once the cryogenic liquid hydrogen and liquid oxygen are consumed, the ship is out of power.
This was not a major problem for the Dyna Soar, as it was practical to provide enough consumables to outlast the crew who could hardly be expected to remain in their small craft for more than a few weeks. But for the robotic X-37B, far longer missions are—obviously—possible.
Thus a power source that won’t quickly deplete was provided in the form of a solar cell array that deploys from the cargo bay. Radiators are integrated into the payload bay doors, similar to the Space Shuttle.
The X-37B provides similar mission capabilities to the Dyna Soar, but seems to be a substantial improvement upon the earlier design in nearly all respects.
While the X-37B improves on Dyna Soar capabilities in most respects, there is one area in which it has not—what is its actual mission? The Dyna Soar died not because the technology wasn’t there, but because the mission wasn’t.
The one advantage that Dyna Soar offered was the ability to transport crew up and down, but crew are no longer necessary for the bulk of the missions that the military assigned to Dyna Soar.
And the question remains for the X-37B. What payload can it take up and provide for that would make sense to return safely to Earth? It’s generally assumed that the X-37B carries reconnaissance equipment. But all data collected by modern recon systems is transmitted digitally; so what actually needs to come back?
The X-37B could certainly carry out the offensive missions the military once planned for the Dyna Soar. The sensors and weapons needed would be much smaller today than in Dyna Soar’s day.
Additionally, various sources have suggested that the X-37B could carry weapons such as nuclear-armed re-entry vehicles and kinetic energy rods. The weapons could remain on orbit for an extended period and, if not used, return to the surface. This would, of course, be a treaty violation.
The X-20 prototype was left unfinished on the factory floor while at least two X-37Bs have not only been built but launched, even though the same questions of military utility remain unanswered. At least publicly.
Clearly the X-37B program succeeded on the political battleground where Dyna Soar ultimately failed.
Scott Lowther is the publisher of Aerospace Projects Review. The Model 844–2050E is the major feature of the latest issue of APR. |
The results of a series of vibration monitoring studies on rolling bearings in wind turbine gearboxes and generators have been published by Schaeffler UK.
Machine vibration comes from many sources, and even small amplitudes can have a severe effect on overall machine vibration. Each source of vibration will have its own characteristic frequencies, which can be manifested as a discrete frequency or as a sum and/or difference frequency.
Vibration monitoring relies on the characteristic vibration signatures which rolling bearings exhibit as the rolling surfaces degrade. The technique is well established for monitoring the mechanical condition of wind turbine drivetrains, although signals from this type of equipment can be complex and difficult to analyse (M&E September/October, p13).
Vibration monitoring can also be used to assess the condition of drivetrain components prior to installation. Over a number of years, Schaeffler UK conducted a series of in-depth vibration monitoring studies on wind turbine gearboxes and generators, prior to installation on the turbine.
Wind turbine gearbox study
Rolling element bearings are manufactured to high accuracy, and great care is taken over the geometrical accuracy, form and surface finish of the rolling surfaces. It is important, therefore, that associated components such as shafts, housings, spacers and so on are all made to these high standards.
In addition, assembling the bearings and associated components in a clean and controlled environment with the correct tools is critical. Assembling large gearboxes is a skilled task and it is not uncommon to find that some damage has been caused to the bearing rolling surfaces during the assembly process.
Damage is easy to introduce, but detecting it is almost impossible without conducting some form of operational test. This often takes the form of running the gearbox on a purpose-built test stand under a range of operating conditions. In some cases, only operating temperatures are measured, but often this is not sufficient to detect damage to the bearing rolling surfaces.
Vibration measurements obtained from various positions on the gearbox, such as the input shaft, intermediate shaft and output shaft, are often the best approach to detect damage.
An example of such a vibration measurement is shown in Figure 1. As part of Schaeffler UK’s studies, a 1.2MW gearbox was run at 1,500rpm on a test stand and vibration measurements were obtained at various positions on the gearbox housing. The vibration spectrum obtained from the housing close to the high speed shaft is shown in Figure 1.
The calculated BPFI (ball pass frequency of the inner race) for the type NU228 cylindrical roller bearing on the high speed shaft was 271.26Hz. Present in the spectrum is a large amplitude vibration at 270.64Hz, which matches very closely with the calculated frequency. Either side of the vibration at 270.64Hz are a few sidebands at shaft rotational speed (fs = 25Hz). In the envelope spectrum, Figure 1(b), the BPFI is also evident at 272.50Hz, along with the third harmonic (817.52Hz).
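The defect frequencies quoted above follow from the standard bearing-kinematics formulas, which need only the rolling-element count, the roller and pitch diameters, the contact angle and the shaft speed. The Python sketch below implements those textbook formulas; the geometry values in the example are illustrative placeholders chosen to land near the quoted BPFI, not the published NU228 dimensions.

```python
import math

def bearing_defect_frequencies(n_rollers, roller_dia, pitch_dia, shaft_hz, contact_angle_deg=0.0):
    """Classic rolling-bearing defect frequencies in Hz."""
    ratio = (roller_dia / pitch_dia) * math.cos(math.radians(contact_angle_deg))
    return {
        "BPFI": (n_rollers / 2) * shaft_hz * (1 + ratio),                     # inner-race defect
        "BPFO": (n_rollers / 2) * shaft_hz * (1 - ratio),                     # outer-race defect
        "BSF": (pitch_dia / (2 * roller_dia)) * shaft_hz * (1 - ratio ** 2),  # roller spin
        "FTF": (shaft_hz / 2) * (1 - ratio),                                  # cage frequency
    }

# Illustrative geometry only -- not the actual NU228 dimensions.
freqs = bearing_defect_frequencies(n_rollers=18, roller_dia=41.0, pitch_dia=200.0, shaft_hz=25.0)
print(freqs["BPFI"])  # ~271.1 Hz, close to the 271.26 Hz quoted above
```

With the shaft turning at 25Hz, inner-race damage should therefore appear at roughly 271Hz with 25Hz sidebands, exactly as seen in the measured spectrum.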
This indicates some damage may be present on the inner ring raceway; the absence of any significant harmonics of the BPFI suggests that the damage is fairly localised. This is further supported by the impulsive nature of the time signal, Figure 2(a), showing impulses at the output rotational speed (40ms or 25Hz).
Figure 2(b) shows the expanded time signal where, during one revolution of the inner ring, the contact of the roller with the defect is clearly visible (~3.52-3.9ms).
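For readers who want to reproduce this kind of analysis, an envelope spectrum can be computed with standard signal-processing tools. The following is a minimal sketch, assuming NumPy and SciPy are available; the test signal is synthetic and simply mimics a 25Hz train of resonance bursts such as a localised inner-ring defect would excite.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(signal, fs):
    """Amplitude envelope spectrum of a vibration signal sampled at fs Hz."""
    envelope = np.abs(hilbert(signal))   # amplitude demodulation
    envelope -= envelope.mean()          # drop the DC component
    spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spectrum

# Synthetic test: short 3 kHz ring-downs repeating 25 times per second.
fs = 10_000
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 3_000 * t) * (np.sin(2 * np.pi * 25 * t) > 0.999)
freqs, spec = envelope_spectrum(signal, fs)
print(freqs[np.argmax(spec[1:]) + 1])  # dominant line near 25 Hz, plus harmonics
```

In a real measurement the raw signal would normally be band-pass filtered around a structural resonance before demodulation, but the principle is the same.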
The gearbox was dismantled and examined and a localised fault was found on the inner ring raceway of the cylindrical roller bearing, Figure 3.
This damage occurred during the assembly process, the most likely cause being misalignment between the inner ring and outer ring/rollers as the inner ring-shaft and outer ring-housing were aligned and assembled together.
It would have gone undetected without the vibration measurements, resulting in shortened service life and premature failure of the gearbox. In this case, the value of a detailed vibration analysis is obvious.
For a copy of the full report Vibration monitoring of rolling bearings to maximise asset reliability, contact Schaeffler UK’s communications & marketing department on [email protected] |
In the world of valves, no “one solution fits all” approach exists when it comes to sealing technology. Selection depends on a myriad of factors such as the media to which the seals are exposed, temperature, pressure and the leakage tightness required. In a process plant, several failure modes for valves are possible, including bonnet and flange leaks and leaks through the seat, but 77 percent of them are caused by stem packing leaks.1 Therefore, valve sealing solutions must be tailored to meet the specific application requirements. The solutions can range from selecting the right valve type and customizing bonnet design to installing the valve in a specific orientation.
Valve seal selection criteria
Fugitive emission service – “Fugitive emission” can be defined as a chemical or mixture of chemicals, in any physical form, which represents an unanticipated or spurious leak from equipment on an industrial site and can be broadly classified as a volatile organic compound and hazardous air pollutant. The American Petroleum Institute’s API 622 is the type test standard for qualifying process valve packing for fugitive emissions. Many valve type test standards exist for fugitive emissions such as API 624, API 641 and ISO 15848-1. These standards specify the performance criteria for the valve qualified, such as number of mechanical cycles, thermal cycles, temperature and pressure, and leakage tightness achieved.
Hydrogen service – Hydrogen is a highly combustible gas often used in combination with other hydrocarbons. Any media containing hydrogen gas with a partial pressure of 7 Bar or above is considered hydrogen service. Because hydrogen is very permeable, hybrid packings with live-loading are preferred. Reducing leakage by impregnating graphite with polytetrafluoroethylene (PTFE) is unacceptable since PTFE can evaporate in a fire with disastrous results.
Steam service – Valves used in combined cycle and supercritical power plants pose unique challenges to stem seal selection. Combined cycle power plants involve high temperatures, high pressures, high Delta Ps and frequent “cycling” of valves because of load variations for plants that are not purely base-load plants. In coal-based power plants, plants operating above 24 MPa (3,480 psi)/593°C (1,100°F) are regarded as ultra-supercritical (USC), and those operating below 24 MPa (3,480 psi) as subcritical.2 For these applications, graphite packings without PTFE and binders are most suitable. These graphite packings could also have an oxidation inhibitor and, optionally, be live-loaded to compensate for frequent thermal cycling.
Oxygen service – When designing a valve for oxygen service, it is important to identify potential sources of ignition and the factors that aggravate propagation because all three elements – oxygen, fuel and heat (source of ignition) – are required to start and propagate a fire. As nonmetals form an important part in the kindling chain in an oxygen system, gland packings and other nonmetallic seals are chosen for design temperature-pressure conditions based on tests conducted for autogenous ignition temperature, aging resistance, ignition sensitivity to gaseous oxygen and liquid oxygen impacts.
Temperature & pressure
As per API RP 615,3 high-temperature service is typically defined as a service with a temperature higher than 205°C (400°F) for soft-seated valves and 400°C (750°F) for metal-seated valves. Low-temperature service is generally defined in the process industry as services that range from -196°C (-320°F) to -30°C (-22°F). These services include liquefied natural gas liquefaction and gasification, natural gas liquid production and ethylene production. While temperature determines the thresholds for the use of nonmetals/metals as seals, pressure determines the permeability and extrusion behavior of seals over time. Graphite packings are reinforced with Inconel to provide blowout-proof strength.
External leakage tightness can either be expressed in parts per million (ppm) or flow rate. Different standards specify leak tightness for valves. API 624 and API 641 are standards for 100 ppm fugitive emissions for rising and rotating stem valves, respectively. ISO 15848 has methane and helium leakage classes and can be extended to isolation and control valves. Valve stem seal selection depends on whether the intended leakage class is Class AH, BH or CH. In a rising stem valve, Class AH would require a metal bellows while Class BH would require a live-loaded low fugitive emissions packing.
Customized bonnet designs
Extended bonnet design
For lower or higher temperature services, the bonnet arrangement/installation could be changed to ensure that a conventional graphite packing system works for the desired temperature and pressure. The heat dissipation length has two functions: to clear the lagging and to have sufficient length outside the lagging to dissipate the heat so the graphite-packing skin temperature remains in an operable range.
Extended bonnet lengths for low temperature are covered in standards such as BS 6364 and MSS SP-134 that address low-temperature/cryogenic applications. Shell MESC SPE 77/212 is a specification that covers “valves in high-temperature services.” This specification suggests that for valves for temperatures above 450°C (842°F) the length of the extension shall be sufficient to maintain the stem packing at a temperature less than or equal to 400°C (750°F) to minimize the potential for the oxidization of graphite, which can affect the valve’s performance.
Other bonnet arrangements
- A lantern ring could be used in conjunction with compression packings for cooling the packing or acting as an injection chamber for a sealant or as a water sealing connection for vacuum service.
- Jacketed valves can be installed for cooling or heating. When a cooling medium is circulated, the jacketing is concentrated around the bonnet so the packing temperature is reduced sufficiently.
- In high-temperature services, it can be effective to install the bonnet below the valve. If the bonnet is below the valve, no convection occurs and the heat from process fluid is transferred by conduction in the bonnet wall only.
- Special cooling arrangements such as a stem cooling arrangement could be used where liquid sodium or water can be used to cool the valve stem and consequently the sealing system.
Bellows-sealed gate & globe valve
A bellows-sealed valve is designed with a metal bellows that expands or contracts with the linear stroke of the valve while providing a solid, permanent barrier between the fluid medium in the body and any potential leak paths to the atmosphere. The purpose of a bellows seal is to provide a metal barrier between the stem at its point of entry through the pressure boundary and the process fluid within the valve to eliminate stem leakage.
Alternate & complementary sealing materials
At present, graphite is a popular packing material for all general sealing applications in the form of gasket and packing rings. The service temperature limit of graphite packing rings has precluded its application for high-temperature services above 450°C (842°F) in oxidizing atmospheres. Conventionally in such high-temperature applications, bonnet extensions are provided in valves that ensure the temperature at the packing remains below 450°C (842°F). However, the use of bonnet extensions becomes prohibitive in some cases where end users want compact piping layouts.
The packing material/configuration selected would have to be stable at high temperatures without any deterioration in the chemical properties and still achieve sealability at elevated temperatures. Following are the sealing materials considered for high-temperature applications:
- High-purity graphite (greater than 99 percent) with oxidation inhibitor can be used at temperatures up to 550°C (1,022°F).
- Packings made from Vermiculite, a natural mineral that expands with the application of heat, are resistant to temperatures up to 1,050°C (1,922°F) but are hygroscopic in nature.
- Packings made from silica fibers are resistant to high temperature and pressure, but silica packings tend to harden at high temperatures.
- Glass wool-based packings are porous but resistant to temperatures up to 1,000°C (1,832°F).
- Mica-based packings can be used at temperatures up to 1,000°C (1,832°F), are partly hygroscopic and can be used for low pressures only.
Spring-energized metal seals made from Inconel can work up to 700°C (1,292°F), though the finish and hardness requirements of the stuffing box and stem could be demanding.
In valves, the concept of independent sealing barriers has been in use for a long time. Primary and secondary stem seals were first used in plug valves and later in ball valves, where the primary seal is responsible for leak tightness while the secondary seal is merely a fire-safe seal when used in process applications. However, with the demands for low-emission valves, or Low-E valves, stem seal configurations have changed: a number of independent barriers are now used in conjunction to create a leak-tight seal. Examples of multibarrier seals are given in Table 2.
- McJones, S. & Sobilo, R. (Sept. 26, 2014). How a Refinery Significantly Reduced Fugitive Emissions.
- Ultra Supercritical Turbines – Steam Oxidation, DOE/ARC-2004-064
- API RP 615, Valve Selection Guide, Second Edition
K.S. Patil is head – product design, research & development, at L&T Valves Limited. He holds a Bachelor of Technology (B.Tech) degree in mechanical engineering. He has more than 30 years of experience in the valve industry. He is responsible for design and development of industrial valves of different types. He holds nine patents related to valves.
Jaisingh Jadhav is senior deputy general manager – business development and heads the North American business of L&T Valves Limited. He holds a Bachelor of Engineering degree in mechanical engineering and has worked with L&T for 24 years.
Ram Viswanathan has 12 years of experience at L&T Valves in different engineering roles in valves design, research and development and reliability engineering. He holds a Bachelor of Engineering (B.E.Hons.) degree in mechanical engineering, is professionally registered as a CFSP (Exida) and as an Incorporated Engineer (ImechE). |
Crude oil is the mixture of petroleum liquids and gases (together with associated impurities) pumped out of the ground by oil wells.
It is described by the location of its origin (e.g., "western Texas" or "Brent") and often by its relative weight or viscosity (light, intermediate, or heavy); it may also be referred to as "sweet", which means it contains relatively little sulfur (in the form of the gas H2S) and requires less refining, or "sour", which means it contains substantial sulfur and requires more refining. The presence of H2S also adds considerably to the production costs as this highly toxic gas cannot simply be emitted into the atmosphere. Usually, it is either stored and then disposed of, or pumped back in the top of the oil reservoir where it expands and helps "push" remaining oil towards producing wells (this is referred to as gas reinjection).
The price of oil fluctuates quite widely in response to crises or recessions in major economies, because any economic downturn reduces the demand for oil. On the supply side, the OPEC cartel uses its influence to stabilise or raise oil prices. In the early spring of 1999, the average price of around US$14 per barrel (less than US$0.15 per liter) meant crude oil was the second cheapest liquid in the world. Currently (March 2003), Brent crude stands at $33 per barrel.
Crude oil, like coal and natural gas, is generally held to be the product of compression of ancient vegetation over geological timescales. A few scientists, notably Thomas Gold, have suggested other, abiogenic, theories for the origins of crude oil. |
Tremendous wind energy resources exist in India and abroad. For example, it is estimated that the wind resources in the Pacific Northwest of the United States could theoretically provide the entire world’s power supply. Other places around the world also have significant wind resources.
Wind power is now considered one of the best alternative energy strategies. The cost per unit is becoming competitive with traditional coal-fueled sources of energy, although it cannot yet match the present low cost of natural gas. Further innovations in wind energy power generation technology, along with the rising cost of fossil fuels, are driving wind energy production costs down.
Near the tropics, moist air ascends and moves towards the poles. As it moves away from the equator, it cools and loses moisture (causing rain in the process).
As the air descends, it heats up and retains more moisture, causing the cycle to sustain itself.
This process repeats over different latitude ranges, as well as at the poles. The addition of the earth’s rotation causes the following overall wind pattern to occur.
Windspeed vs. Time
Average wind speed is the most important element of wind power design. However, wind speed varies dramatically over time. It is constantly changing direction and speed, even on a second-to-second basis. For this reason, data logging over an extended duration of time is essential to smooth out the irregularities in speed and direction. This gives a more accurate profile of the wind at a particular site.
Wind Speed vs. Height Estimations
The wind speed varies greatly with height and the surface characteristics. Many atmospheric measurements are taken at the standardized height of 10 meters. This is just above the tops of the trees in many places.
There are two general methods of estimating wind speed versus height.
v(h2)/v(h1) = (h2/h1)^(1/7)
where h1 and h2 are two different heights (h2 > h1).
This is the simpler formula, which is mostly valid for smooth terrain, such as near water. There are some particular cases in which this formula is not valid, such as valleys in which winds are accelerated.
There is a more complicated but also more accurate wind speed estimation formula, which is based on the natural logarithm.
v2/v1 = (ln(h2/z0))/(ln(h1/z0)),
where z0 is the roughness factor, which has the following coefficients according to the type of surface the wind is flowing over:
· Class 0 – Water – 0.0002 (meter)
· Class 1 – Open land – 0.03 (meter)
· Class 2 – Farmland – 0.10 (meter)
· Class 3 – Urban and obstructed rural – 0.4 (meter)
It should be noted that the lengths shown above do not correspond to the lengths of physical objects.
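A worked comparison of the two estimation methods makes the difference concrete. The sketch below assumes a measured speed of 5 m/s at the standard 10-meter height; the numbers are illustrative only.

```python
import math

def wind_speed_power_law(v1, h1, h2, alpha=1 / 7):
    """One-seventh power law: mostly valid over smooth, open terrain."""
    return v1 * (h2 / h1) ** alpha

def wind_speed_log_law(v1, h1, h2, z0):
    """Logarithmic law using the surface roughness length z0 (meters)."""
    return v1 * math.log(h2 / z0) / math.log(h1 / z0)

v10 = 5.0  # measured at 10 m
print(wind_speed_power_law(v10, 10, 70))         # ~6.6 m/s at a 70 m hub
print(wind_speed_log_law(v10, 10, 70, z0=0.03))  # ~6.7 m/s over Class 1 open land
```

Both formulas predict roughly a one-third increase in wind speed between the 10-meter measurement height and a 70-meter hub, which, because power grows with the cube of speed, more than doubles the available power.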
Combining the facts that wind power increases with the cube of the wind speed (covered in the Wind Turbine Virtual Lab) and the present result, that the wind speed increases substantially with height, yields the result that the power output of a turbine increases dramatically when the turbine height above ground increases. Again, this depends on the location and wind speed profile both in time and height.
This is the reason that modern turbines are placed so high above the ground, with turbine hub heights of as much as 70 meters.
An additional benefit is that large wind speed spikes can significantly contribute to the average power output. However, the turbine design must be optimized to handle these spikes.
Practical Wind Speed Assessment Tips
Wind speed knowledge is important for turbine design, efficiency, and optimization. Because of the large importance of knowing the wind speed profile for selecting the turbine to be installed, it is suggested that measurements be taken at different heights for 1 year. The various heights frequently monitored are the wind turbine hub height, and the hub height plus and minus the blade length.
Optimum wind power placement is above flat terrain with the smoothest possible surface upwind of the turbine (from the wind’s perspective). Sites near ocean bodies are typically the best.
The first step in assessing a site for suitability for wind power production is to analyze the available wind resources. Wind energy varies greatly with location and height. A wind sensor, called an anemometer, measures the velocity and direction of the wind stream being monitored.
While wind resource data is available in various online databases, these databases only cover macro wind data, for example data gleaned from orbiting satellites, or data from specific monitoring points, such as installed meteorological weather stations, which do not always provide information relevant to the particular site being assessed. For example, buildings, trees, height, nearby water bodies and many other factors may affect the locally available wind power.
This Virtual Lab experiment is designed to teach the process of actual, applied wind energy site assessment using modern data acquisition systems and the relevant data post-processing techniques. At the end of this experiment, the student will have the necessary skills to analyze wind data for a real-world assessment of a site for wind energy production.
Kinetic Energy and Wind Energy Density of an Airstream
The Kinetic Energy of a stream of wind is given by:
KE = ½ mv² = ½ ρ (AΔx) v²,
where ρ is the air density, A is the cross section area of the stream, and Δx is in the direction of wind movement (mass = density * volume = density * area * delta length = ρAΔx).
Then the Wind Energy Density, Pw, is given by the derivative of KE with respect to time, divided by the area:
Pw = (dKE/dt) * (1/A) = ½ ρ A (dx/dt) v² * (1/A)
Pw = ½ ρ (dx/dt) v² = ½ ρ v v² = ½ ρ v³
Pw = ½ ρ v³ (watts/meter²)
Also, it’s important to note that the density of the air changes with temperature:
ρ = P/(RT) (kg/m³)
where P = Pressure, R = the Gas Constant, and T = Temperature. The air density, ρ, is frequently assumed to be constant at sea level and equal to 1.2929 kg/m³.
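A short sketch ties these two formulas together; the pressure, temperature and gas-constant values below are standard sea-level figures used only for illustration.

```python
def air_density(pressure_pa=101_325.0, temp_k=288.15, r_air=287.05):
    """Ideal-gas density rho = P/(R*T); r_air is the specific gas constant of dry air."""
    return pressure_pa / (r_air * temp_k)

def wind_energy_density(v, rho=1.2929):
    """Pw = 0.5 * rho * v^3 in watts per square meter."""
    return 0.5 * rho * v ** 3

print(air_density())              # ~1.225 kg/m3 at 15 degrees C, sea level
print(wind_energy_density(5.0))   # ~80.8 W/m2
print(wind_energy_density(15.0))  # ~2,182 W/m2 -- 27 times the 5 m/s value
```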
The Practical Aspects of Wind Energy Density
As shown above, the Wind Energy Density is proportional to the cube of the wind speed. Therefore, a 3-fold increase in wind speed corresponds to a 27-fold increase in Wind Energy Density. This means the wind speed peaks contribute more to the average wind energy density than the dips in wind speed take away from it.
A similar phenomenon occurs with cars – doubling the speed requires increasing the horsepower by a factor of 8.
This is why racing cars may have so much more power (800 hp+), 4 to 8 times as much as a normal car, but only go 2 or 3 times faster. (Other factors affecting the equation are reduced mass, altered aerodynamics, and the need for high acceleration capability.)
The previous equation, Pw = ½ ρ v3 (watts/meter2), means that the available power generating capacity of the wind increases greatly as wind speed increases.
A more useful measurement is the average wind energy received over a given unit area over a given time frame. Average wind energy values are very important to gather, as wind energy measurements change dramatically over the short term, but are reliable when averaged over the relevant longer time frame. A proper assessment of the energy collected is essential to determining the type and size of wind turbine to be installed. The installation of an improperly selected wind turbine is not only expensive, but it may not rotate under the given wind conditions and will not generate energy.
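The toy calculation below shows why the averaging must be done on the cube of each logged speed rather than on the average speed itself; the logged values are hypothetical.

```python
rho = 1.2929
speeds = [3.0, 5.0, 12.0, 4.0, 6.0]  # hypothetical logged wind speeds, m/s

mean_of_cubes = 0.5 * rho * sum(v ** 3 for v in speeds) / len(speeds)
cube_of_mean = 0.5 * rho * (sum(speeds) / len(speeds)) ** 3

print(mean_of_cubes)  # ~279 W/m2 -- the correct average energy density
print(cube_of_mean)   # ~140 W/m2 -- a large underestimate; the 12 m/s peak dominates
```

Cubing the average speed instead of averaging the cubed speeds would here hide roughly half the available energy, which is why long-duration logging at fine time resolution matters.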
To determine the power an installed wind turbine will produce, it is necessary to multiply the wind energy density by the swept area of the rotor and by a theoretical maximum limit of energy extraction called the Betz limit. The Betz limit is 0.59 and is dimensionless. For a proof of the Betz limit, please see the Webliography in the References tab.
Pi = ½ A ρ v³ (watts)
where A is the swept area of the wind turbine; A = πr², where r is the radius of the turbine rotor.
Due to other losses, as explained in other Wind Energy Virtual Labs, the maximum power produced by a turbine is normally much less than this theoretical limit. |
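Putting the pieces together, a back-of-the-envelope turbine output estimate looks like the sketch below. The 40-meter rotor radius and 10 m/s wind speed are hypothetical inputs, and the Betz coefficient caps the extractable fraction.

```python
import math

def betz_limited_power(rotor_radius_m, v, rho=1.2929, betz=16 / 27):
    """Upper bound on turbine output: 0.5 * rho * (pi * r^2) * v^3, capped by the Betz limit."""
    swept_area = math.pi * rotor_radius_m ** 2
    return 0.5 * rho * swept_area * v ** 3 * betz

print(betz_limited_power(40.0, 10.0) / 1e6)  # ~1.9 MW before drivetrain and aerodynamic losses
```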
A Private Limited Company is one of the most common business entities in India. In such companies, the Directors play an important role during the company incorporation process and the post-incorporation process. This article will cover all the aspects of being a director in a private limited company.
Definition of Director:
The Companies Act, 2013 defines the term “Director” as someone who is appointed to the board of a company. The Board of Directors is a group of individuals elected by the shareholders of a company in order to manage the affairs of the company. Since a company is an artificial legal person created by law, it can act only through the agency of natural persons, and it is mainly through its Directors that the company acts. Therefore, the management of a company is entrusted to a body of persons called the “Board of Directors”.
A Director can also be defined as someone who administers, controls or directs something, especially as a member of a commercial company; one who supervises, controls or manages; a person elected by the shareholders of a company in order to direct the company’s policies; or a person appointed or elected according to law and authorized to manage and direct the affairs of a company.
However, for a person to become a director at the time of private limited company registration, he/she is required to have a Director Identification Number (DIN). A DIN can be obtained by any person who is over the age of 18 by applying to the DIN Cell.
A DIN is a unique 8-digit Director Identification Number allotted by the Central Government to any person who is going to be a Director or is an existing director of a company, and obtaining one is a very easy task. The DIN has lifetime validity. With the help of the Director Identification Number (DIN), the details of directors are maintained in a database.
A company can have different types of directors such as:
A “Managing Director” refers to a Director who, by the virtue of the Articles of Association of the company or by an agreement with the company or by a resolution passed at its annual general meeting, or by its Board of Directors, is entrusted with the substantial powers of the management of the affairs of the company.
Whole-time Director or Executive Director
An Executive Director or a whole-time Director is someone who is in full-time employment of the company.
An “Ordinary Director” refers to a simple Director who attends the Board meetings of a company and participates in the matters that are put before the Board of Directors. These Directors are neither whole-time Directors nor Managing Directors.
The Maximum and the Minimum Number of Directors in a Private Limited Company.
Only an individual (living person) can be appointed as a Director of a company. A body corporate or a business entity cannot be appointed as a Director. A company can have a maximum of fifteen Directors, and the number can be increased further by passing a special resolution.
Thus the minimum number of Directors required in different types of companies is as follows:
- For a Private Limited Company – Minimum two Directors
- For a Limited Company – Minimum three Directors
- For One Person Company – Minimum one Director
However, in recent years, a change has been introduced according to which listed companies and public companies having a paid-up share capital of Rs.100 crore or more or a turnover of Rs.300 crore or more are required to appoint at least one woman Director. There is no woman Director requirement for a private limited company registration.
In ideal cases, you place the pump below the liquid level before you begin pumping. In this case, air pressure and gravity ensure that the pump continually fills to keep air from entering the pump’s suction line.
In most everyday applications, however, such as emptying underground storage tanks, it is more practical to have the pump above the liquid. Nonetheless, when starting up, you will have to evacuate or discharge the air in the pump’s suction line before you begin pumping out the liquid.
Since manufacturers build most pumps to move liquids, discharging the air becomes a complicated issue to address. To solve this, you have two options: secondary pumps and self-priming pumps.
One of the most common solutions for evacuating suction lines is using secondary pumps. These use non-return valves and evacuation tanks to keep fluids from draining back into the pump’s suction line once you stop pumping.
However, since this solution requires you use extra piping and accessory equipment, it turns out to be expensive to use, especially for short-term projects.
Self-priming pumps are a more economical option than secondary pumps for evacuating air in suction lines. With these pumps, discharging air begins automatically, and without accessory equipment, right when you begin pumping.
Typical examples of these pumps are the positive displacement pump types, including diaphragm, rotary gear and vane pumps, which use close tolerances to keep fluids from returning to suction lines.
Manufacturers design and build pumps for different applications, most of which are to discharge liquids. However, in most cases, pumps cannot perform this function, at least not effectively, if air is present in the pump’s suction line.
Discharging this air every time you are using the pump can be not only time-consuming but also labour-intensive. Thanks to self-priming pumps, however, you can handle this problem without having to involve the tedious processes of evacuating the air in a typical pump’s suction line. |
By Rachel Moore
The Junior Achievement Young Enterprise (JA-YE) recently received a USD 96,000 grant from the Monsanto Fund for their European programme “Fostering Innovation-Driven Entrepreneurship.”
JA-YE is an organisation that provides young people with the necessary experiences, skills, understanding and perspective to succeed in a global economy. The grant will help fund an already-established project that has been running for 10 years.
The START UP Programme is the main project of JA-YE, and “uses hands-on experiences to help young people understand the economics of life. In partnership with business and educators, JA-YE brings the real world to students and opens their minds to their potential.”
According to their website, the goal of this project is “to promote long-term growth and job creation by significantly improving the conditions for innovation-driven entrepreneurship, creating an immediate and sustainable impact.”
JA-YE does this by connecting program participants with key entrepreneurial leaders in the community, which creates a network of mentors and mentees who are unified in European entrepreneurship.
Every year, JA-YE works with the World Economic Forum (WEF) to sponsor the European Enterprise Challenge, a competition where students present the projects they have been working on at their partnered start-up companies.
A panel of judges acts as “potential investors” and decides how much they would be willing to invest in each start-up company. The team with the highest amount of investment wins the Challenge.
The 2014 Challenge’s press release and complete list of winners can be seen here.
For more information about Monsanto Fund’s international programmes and how to apply, visit http://www.monsantofund.org/grants/international/. |
Corrosion of aluminium surfaces
Direct decomposition of an aluminium surface is called corrosion or corrosive attack. The most common types of corrosion are:
- Galvanic corrosion occurs when metals in contact break down each other.
- If the aluminium comes into contact with a more precious metal (such as copper, zinc and certain types of steel), the aluminium will be broken down.
- It is therefore problematic for example to combine aluminium and galvanised steel because galvanised steel is covered with zinc, which is more precious than aluminium. Hence, it is aluminium and not the galvanised surface that will be broken down.
- Pitting corrosion most often occurs as local damage to the aluminium surface and usually results in aesthetic damage rather than functional damage.
- Pitting corrosion can occur if the aluminium is in a very damp environment where there are often salts present.
- For example, it occurs in common dirt and debris as well as in environments where the water cannot be led away from the metal.
Aluminium in maritime environments
If aluminium is to be used in maritime environments, and thus must be resistant to seawater in order to prevent corrosion, standard EN 13195:2009 recommends using predominantly 5000 and 6000 series alloys for maritime projects, such as alloys 5083, 5754, 6060 and 6082.
For maritime aluminium structures it is further recommended, pursuant to EN 1999-1-1 (Eurocode 9), to use screws, bolts and other connection elements in A4 316 material, which is acid- and rust-proof. If these are not used, there is a risk of galvanic corrosion.
Acids and bases are damaging to aluminium
The optimal pH value for the oxide layer is in the range 4 to 9. Acids and bases break down the oxide layer, thereby opening up the raw aluminium surface. If aluminium is exposed to very strong acid or alkaline environments outside the pH range 4 to 9, violent corrosion will occur in the form of metal pitting.
Bases break down the aluminium faster than acids - for example concentrated caustic soda reacts so violently with aluminium that it can start to boil. The reaction is powerful and causes the temperature to rise, and the higher the temperature, the faster the reaction is. Thus, the reaction between aluminium and the base is self-accelerating and can accelerate violently. An example of a common alkaline material is concrete, which normally has a pH value of between 12.5 and 13.5. (Source:https://ing.dk/artikel/betonens-ph-vaerdi-falder-76466). Concrete can therefore cause damage to the aluminium surface in the form of pitting. |
SpaceX, fresh from its successful recovery of a rocket stage following the launch of 11 satellites on Dec. 21, plans to attempt a risky ocean drone ship recovery on Jan. 17. The recovery would follow the launching of a NASA satellite, known as Jason-3, that will be used to monitor global sea levels.
The launch aboard the SpaceX Falcon 9 rocket is scheduled for 1:42 p.m. ET on Jan. 17 from Vandenberg Air Force Base in California, with a backup launch opportunity occurring on the morning of Jan. 18, according to NASA.
SpaceX has tried and failed to recover a stage of its Falcon 9 rocket on a drone ship (which is basically a barge with a landing platform) at least three times before. Its test landing on dry land, which is an arguably easier accomplishment considering the lack of waves at the landing pad in Cape Canaveral, Florida, was successful and marked a new era in spaceflight.
The launch and landing was the first time any company has landed an orbital rocket segment back on Earth following a launch to space. Elon Musk, the billionaire founder and CEO of SpaceX, sees reusable rockets as the key to dramatically lowering the cost of accessing space, and potentially paving the way for an eventual Mars mission.
At the moment, it costs about $60 million to launch a payload to space with a Falcon 9 rocket, but the fuel itself costs about $200,000, according to Musk. By vertically landing the rocket on a pad back on Earth, rocket companies can re-use the stages for other missions instead of allowing the expensive hardware to re-enter the atmosphere, burning up along the way.
Reusing rocket stages could bring the cost of spaceflight down by orders of magnitude.
"It's much like refueling a 747 or something," Musk said during a session at the American Geophysical Union's annual meeting in San Francisco in December.
Musk is not alone in his vision of reusing rocket stages in order to lower the cost of space exploration. Blue Origin, the spaceflight company owned by Amazon's Jeff Bezos, also succeeded in recovering a rocket stage after a launch in November 2015. However, Blue Origin's rocket did not make it as high as SpaceX's did.
Sea level rise satellite
The 1,100-pound Jason-3 satellite is meant to improve scientists' understanding of ocean currents, wave heights and changes in global sea level. It is the fourth in a series of missions designed to monitor long-term sea level rise, beginning in 1992.
Since 1992, global sea level rise has accelerated, with 2.8 inches of global average sea level rise observed during the period. Oceans are rising mainly due to warming ocean temperatures, which causes water to expand, and melting polar ice caps, which adds more water to the sea.
According to the National Oceanic and Atmospheric Administration (NOAA), the satellite will fly in a low Earth orbit that will allow it to monitor 95% of the Earth's ice-free oceans every 10 days.
“The rate of sea-level rise is an important indicator of climate change happening around the world,” said Laury Miller, NOAA’s Jason-3 program scientist and chief of NOAA’s Laboratory for Satellite Altimetry, in a statement. “We are already seeing significant impacts on coastal regions globally, including more frequent flooding events along the coastal United States.”
The satellite will also help scientists predict fluctuations in hurricane intensity by examining ocean heat content.
“The ocean heat content from satellite altimeters can reduce the error of NOAA’s hurricane intensity forecast models by as much as 20% in some instances,” Miller said in a statement.
Jason-3 is an international cooperative mission in which NOAA is partnering with NASA, CNES (the French Space Agency) and the European satellite agency known as EUMETSAT. Jason-3 will eventually replace Jason-2, which was launched in 2008 and is still functioning. |